From da80d6627ca70a44a0e3ea1f9d8331bd22aaa35e Mon Sep 17 00:00:00 2001
From: Xinrui Chen
Date: Mon, 23 Jun 2025 16:43:14 +0800
Subject: [PATCH] [MindFormers] Fix unavailable links

---
 .../advanced_development/performance_optimization.md  | 2 +-
 .../docs/source_en/guide/supervised_fine_tuning.md    | 4 ++--
 .../advanced_development/performance_optimization.md  | 2 +-
 .../docs/source_zh_cn/guide/supervised_fine_tuning.md | 4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/mindformers/docs/source_en/advanced_development/performance_optimization.md b/docs/mindformers/docs/source_en/advanced_development/performance_optimization.md
index 96b262ee10..506a950ec8 100644
--- a/docs/mindformers/docs/source_en/advanced_development/performance_optimization.md
+++ b/docs/mindformers/docs/source_en/advanced_development/performance_optimization.md
@@ -328,7 +328,7 @@ Large model training performance tuning requires simultaneous consideration of m
 
 MindSpore provides SAPP (Symbolic Automatic Parallel Planner) automatic load balancing tool. Inputting the model memory and time information, as well as some of the pipeline parallel performance-related hyper-references (e.g., the impact of recomputation on performance), the tool will construct the linear programming problem by itself, through the global solution, automatically generate stage-layer ratios in the pipeline parallel for the large model, adjust the recalculation strategy of each layer, automatically optimize the cluster arithmetic power and memory utilization, reduce the idle waiting time, realize the Pipeline parallel minute-level strategy optimization, greatly reduce the performance tuning cost, and significantly improve the end-to-end training performance.
 
-For detailed usage, please refer to [SAPP Pipelined Load Balancing](https://gitee.com/mindspore/toolkits/tree/master/autoparallel/pipeline_balance) tool introduction.
+For detailed usage, please refer to [SAPP Pipelined Load Balancing](https://gitee.com/mindspore/toolkits/tree/master/perftool/autoparallel/pipeline_balance) tool introduction.
 
 ## Overall Concept
 
diff --git a/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md b/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
index 36523b3f5b..89826861f3 100644
--- a/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
+++ b/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
@@ -64,7 +64,7 @@ This guide uses [llm-wizard/alpaca-gpt4-data](https://huggingface.co/datasets/ll
 
 #### Single-NPU Training
 
-First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k_1p.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml).
+First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k_1p.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml).
 
 > Due to limited single-NPU memory, the `num_layers` in the configuration file is set to 4, used as an example only.
 
@@ -104,7 +104,7 @@ run_mode: Running mode, train: training, finetune: fine-tuning, predict
 
 #### Single-Node Training
 
-First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml).
+First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml).
 
 Then, modify the parameters in the configuration file based on actual conditions, mainly including:
 
diff --git a/docs/mindformers/docs/source_zh_cn/advanced_development/performance_optimization.md b/docs/mindformers/docs/source_zh_cn/advanced_development/performance_optimization.md
index eea05bf4b5..4ae64cc316 100644
--- a/docs/mindformers/docs/source_zh_cn/advanced_development/performance_optimization.md
+++ b/docs/mindformers/docs/source_zh_cn/advanced_development/performance_optimization.md
@@ -328,7 +328,7 @@ context:
 
 MindSpore提供了SAPP(Symbolic Automatic Parallel Planner)自动负载均衡工具。只需输入模型的内存和时间信息,以及部分流水线并行性能相关的超参(如重计算对性能的影响),工具将自行构建线性规划问题,通过全局求解的方式,为大模型自动生成流水线并行中的stage-layer配比,调整各layer重计算策略,自动优化集群算力和内存利用率,降低空等时间,实现Pipeline并行分钟级策略寻优,大幅度降低性能调优成本,显著提升端到端训练性能。
 
-详细使用方法,请参考[SAPP流水线负载均衡](https://gitee.com/mindspore/toolkits/tree/master/autoparallel/pipeline_balance)工具介绍。
+详细使用方法,请参考[SAPP流水线负载均衡](https://gitee.com/mindspore/toolkits/tree/master/perftool/autoparallel/pipeline_balance)工具介绍。
 
 ## 整体思路
 
diff --git a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
index da12f3194c..98b3449ea9 100644
--- a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
+++ b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
@@ -64,7 +64,7 @@ MindSpore Transformers提供在线加载Hugging Face数据集的能力,详细
 
 #### 单卡训练
 
-首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k_1p.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml)下载。
+首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k_1p.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml)下载。
 
 > 由于单卡显存有限,配置文件中的`num_layers`被设置为了4,仅作为示例使用。
 
@@ -106,7 +106,7 @@ run_mode: 运行模式,train:训练,finetune:微调,predict
 
 #### 单机训练
 
-首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml)下载。
+首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml)下载。
 
 然后根据实际情况修改配置文件中的参数,主要包括:
 
-- 
Gitee
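A quick way to confirm the corrected links actually resolve is a small link check. The sketch below is illustrative only and not part of the patch (text after the `-- ` signature line is ignored when applying it); it assumes Python 3 with network access to gitee.com, and the URLs are copied from the `+` lines of the diff above.

```python
# Illustrative link check for the URLs updated in this patch.
# Assumption: plain GET requests to gitee.com succeed without authentication.
import urllib.request

# The corrected targets, taken from the "+" lines of the diff.
URLS = [
    "https://gitee.com/mindspore/toolkits/tree/master/perftool/autoparallel/pipeline_balance",
    "https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml",
    "https://gitee.com/mindspore/docs/tree/master/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml",
]

for url in URLS:
    try:
        with urllib.request.urlopen(url, timeout=15) as resp:
            print(f"{resp.status}  {url}")  # 200 means the link resolves
    except Exception as exc:  # HTTPError (e.g. 404), URLError, timeout
        print(f"FAIL  {url}  ({exc})")
```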