diff --git a/PyTorch/contrib/cv/classification/MAE_for_PyTorch/README.md b/PyTorch/contrib/cv/classification/MAE_for_PyTorch/README.md
index 3919fc9430656c6308cc649dc211ba7e9d0a4c37..d0c1281073da775bcb837bfa32ccf8408bf33c88 100644
--- a/PyTorch/contrib/cv/classification/MAE_for_PyTorch/README.md
+++ b/PyTorch/contrib/cv/classification/MAE_for_PyTorch/README.md
@@ -172,13 +172,13 @@ Although MAE's design is simple, it has been proven to be a powerful and scalable vis
    # Writes the full fine-tuning log to ./output_finetune_full_8p/Atlas_8p_finetune_full.log and the summary log to ./output_finetune_full_8p/log.txt
    bash ./test/finetune_full_8p.sh --data_path=real_data_path --finetune_pth=pretrained_model_path
 
-   # fine-tuning_large 8p performance: runs 1 epoch on 8p; takes about 14 min on 910
+   # fine-tuning_large 8p performance: runs 1 epoch on 8p
    bash ./test/finetune_performance_large_8p.sh --data_path=real_data_path --finetune_pth=pretrained_model_path
 
-   # fine-tuning_large 8p full: runs 50 epochs on 8p; takes about 12 h on 910
+   # fine-tuning_large 8p full: runs 50 epochs on 8p
    bash ./test/finetune_full_large_8p.sh --data_path=real_data_path --finetune_pth=pretrained_model_path
 
-   # fine-tuning_large 16p full: runs 50 epochs on 16p; takes about 6 h on 910
+   # fine-tuning_large 16p full: runs 50 epochs on 16p
    bash ./test/finetune_full_large_16p.sh --data_path=real_data_path --finetune_pth=pretrained_model_path
 
    # 8p Base_eval: takes about 3 min
@@ -186,7 +186,7 @@ Although MAE's design is simple, it has been proven to be a powerful and scalable vis
    bash ./test/finetune_eval_8p.sh --data_path=real_data_path --resume_pth=finetuned_model_path
    ```
 
-  Note: The MAE-Large configuration is trained on 910; for the 16p script, the node IP must be modified in the launch script
+  Note: For the 16p script, the node IP must be modified in the launch script
 
   The model training script parameters are described below.
 
@@ -229,9 +229,7 @@ Although MAE's design is simple, it has been proven to be a powerful and scalable vis
 | 8p-NPU  | 83.86 | 1603 | 50 | O2 | 1.8 |
 | 16p-NPU | 85.97 | 3145 | 50 | O2 | 1.8 |
 
-Note 1: The MAE-Large configuration is trained on 910
-
-Note 2: The table above is historical data, for reference only. Performance data updated on May 10, 2025 is as follows:
+Note: The table above is historical data, for reference only. Performance data updated on May 10, 2025 is as follows:
 | NAME | Precision | FPS |
 | :------ |:-------:|:------:|
 | 8p-competitor | FP16 | 3538.01 |