diff --git a/PyTorch/built-in/nlp/Bert-Squad_ID0470_for_PyTorch/README.md b/PyTorch/built-in/nlp/Bert-Squad_ID0470_for_PyTorch/README.md
index c6d77a73cbfede5c885e3efde41e569e4defbb45..7ceccd0e0025510d4aa0f21c94b119aa576f17e6 100644
--- a/PyTorch/built-in/nlp/Bert-Squad_ID0470_for_PyTorch/README.md
+++ b/PyTorch/built-in/nlp/Bert-Squad_ID0470_for_PyTorch/README.md
@@ -39,11 +39,9 @@ BERT-Large模型是一个24层,1024维,24个自注意头(self attention he
 
   | Torch_Version | 三方库依赖版本 |
   | :--------: | :----------------------------------------------------------: |
-  | PyTorch 1.5 | - |
-  | PyTorch 1.8 | - |
   | PyTorch 1.11 | - |
   | PyTorch 2.1 | - |
-  
+
 - 环境准备指导。
 
   请参考《[Pytorch框架训练环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/ptes)》。
@@ -137,7 +135,7 @@ BERT-Large模型是一个24层,1024维,24个自注意头(self attention he
 
   ```
   bash test/train_large_full_8p.sh --data_path=/xxx/v1.1 --ckpt_path=real_path # bert-large 8卡精度
-  
+
   bash test/train_base_full_8p.sh --data_path=/xxx/v1.1 --ckpt_path=real_path # bert-base 8卡精度
 
   bash test/train_base_performance_8p.sh --data_path=/xxx/v1.1 --ckpt_path=real_path # bert-base 8卡性能
diff --git a/PyTorch/contrib/audio/wav2vec2.0/README.md b/PyTorch/contrib/audio/wav2vec2.0/README.md
index 706b8f84275af2018ca5d3e27e72bef369af4d5b..d85334ba834ff2db7bfb6de2e950155b3eb3155f 100644
--- a/PyTorch/contrib/audio/wav2vec2.0/README.md
+++ b/PyTorch/contrib/audio/wav2vec2.0/README.md
@@ -36,10 +36,8 @@ Wav2vec2.0是Meta在2020年发表的无监督语音预训练模型。它的核
 
   | Torch_Version | 三方库依赖版本 |
   | :--------: | :----------------------------------------------------------: |
-  | PyTorch 1.5 | - |
-  | PyTorch 1.8 | - |
   | PyTorch 1.11 | - |
-  
+
 - 环境准备指导。
 
   请参考《[Pytorch框架训练环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/ptes)》。
@@ -131,7 +129,7 @@ Wav2vec2.0是Meta在2020年发表的无监督语音预训练模型。它的核
   --config-dir //配置文件路径
   --config-name //配置文件名称
   ```
-  
+
 
 训练完成后,权重文件保存在当前路径下,并输出模型训练精度和性能信息。
 