diff --git a/docs/mindformers/docs/source_en/advanced_development/performance_optimization.md b/docs/mindformers/docs/source_en/advanced_development/performance_optimization.md
index 96b262ee10fffd67f68ed1594d532f32d262152b..506a950ec8b4315d35fbd185a8d1f01f0d5fa88f 100644
--- a/docs/mindformers/docs/source_en/advanced_development/performance_optimization.md
+++ b/docs/mindformers/docs/source_en/advanced_development/performance_optimization.md
@@ -328,7 +328,7 @@ Large model training performance tuning requires simultaneous consideration of m
 
 MindSpore provides SAPP (Symbolic Automatic Parallel Planner) automatic load balancing tool. Inputting the model memory and time information, as well as some of the pipeline parallel performance-related hyper-references (e.g., the impact of recomputation on performance), the tool will construct the linear programming problem by itself, through the global solution, automatically generate stage-layer ratios in the pipeline parallel for the large model, adjust the recalculation strategy of each layer, automatically optimize the cluster arithmetic power and memory utilization, reduce the idle waiting time, realize the Pipeline parallel minute-level strategy optimization, greatly reduce the performance tuning cost, and significantly improve the end-to-end training performance.
 
-For detailed usage, please refer to [SAPP Pipelined Load Balancing](https://gitee.com/mindspore/toolkits/tree/master/autoparallel/pipeline_balance) tool introduction.
+For detailed usage, please refer to [SAPP Pipelined Load Balancing](https://gitee.com/mindspore/toolkits/tree/master/perftool/autoparallel/pipeline_balance) tool introduction.
 
 ## Overall Concept
 
diff --git a/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md b/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
index 36523b3f5b078d5fba17a9bbe4c1fff939ccb69e..89826861f37f258ebabe5bbc7e554d8ad6a8992e 100644
--- a/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
+++ b/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
@@ -64,7 +64,7 @@ This guide uses [llm-wizard/alpaca-gpt4-data](https://huggingface.co/datasets/ll
 
 #### Single-NPU Training
 
-First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k_1p.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml).
+First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k_1p.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml).
 
 > Due to limited single-NPU memory, the `num_layers` in the configuration file is set to 4, used as an example only.
 
@@ -104,7 +104,7 @@ run_mode: Running mode, train: training, finetune: fine-tuning, predict
 
 #### Single-Node Training
 
-First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml).
+First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml).
 
 Then, modify the parameters in the configuration file based on actual conditions, mainly including:
 
diff --git a/docs/mindformers/docs/source_zh_cn/advanced_development/performance_optimization.md b/docs/mindformers/docs/source_zh_cn/advanced_development/performance_optimization.md
index eea05bf4b514cfe237f368afd3892bf870dbb5ef..4ae64cc316de34c04b8823916f91733fa943155e 100644
--- a/docs/mindformers/docs/source_zh_cn/advanced_development/performance_optimization.md
+++ b/docs/mindformers/docs/source_zh_cn/advanced_development/performance_optimization.md
@@ -328,7 +328,7 @@ context:
 
 MindSpore提供了SAPP(Symbolic Automatic Parallel Planner)自动负载均衡工具。只需输入模型的内存和时间信息,以及部分流水线并行性能相关的超参(如重计算对性能的影响),工具将自行构建线性规划问题,通过全局求解的方式,为大模型自动生成流水线并行中的stage-layer配比,调整各layer重计算策略,自动优化集群算力和内存利用率,降低空等时间,实现Pipeline并行分钟级策略寻优,大幅度降低性能调优成本,显著提升端到端训练性能。
 
-详细使用方法,请参考[SAPP流水线负载均衡](https://gitee.com/mindspore/toolkits/tree/master/autoparallel/pipeline_balance)工具介绍。
+详细使用方法,请参考[SAPP流水线负载均衡](https://gitee.com/mindspore/toolkits/tree/master/perftool/autoparallel/pipeline_balance)工具介绍。
 
 ## 整体思路
 
diff --git a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
index da12f3194cb796a43dff7b9190d0348a1b0581aa..98b3449ea9060ebd6c9db3c7b3625e2ffc32071f 100644
--- a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
+++ b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
@@ -64,7 +64,7 @@ MindSpore Transformers提供在线加载Hugging Face数据集的能力,详细
 
 #### 单卡训练
 
-首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k_1p.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml)下载。
+首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k_1p.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml)下载。
 
 > 由于单卡显存有限,配置文件中的`num_layers`被设置为了4,仅作为示例使用。
 
@@ -106,7 +106,7 @@ run_mode: 运行模式,train:训练,finetune:微调,predict
 
 #### 单机训练
 
-首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml)下载。
+首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml)下载。
 
 然后根据实际情况修改配置文件中的参数,主要包括: