From 21962286385ac05684b71af9b619a89e5347d003 Mon Sep 17 00:00:00 2001
From: huan <3174348550@qq.com>
Date: Mon, 14 Jul 2025 11:17:58 +0800
Subject: [PATCH] modify error links

---
 .../docs/source_en/guide/supervised_fine_tuning.md           | 4 ++--
 .../convert_ckpt_to_megatron/convert_ckpt_to_megatron.md     | 2 +-
 .../docs/source_zh_cn/guide/supervised_fine_tuning.md        | 4 ++--
 .../source_en/model_infer/ms_infer/llm_inference_overview.md | 2 +-
 .../model_infer/ms_infer/llm_inference_overview.md           | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md b/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
index 9bc5ccf9b7..bb33073212 100644
--- a/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
+++ b/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
@@ -64,7 +64,7 @@ This guide uses [llm-wizard/alpaca-gpt4-data](https://huggingface.co/datasets/ll
 
 #### Single-NPU Training
 
-First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k_1p.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml).
+First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k_1p.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/blob/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml).
 
 > Due to limited single-NPU memory, the `num_layers` in the configuration file is set to 4, used as an example only.
 
@@ -104,7 +104,7 @@ run_mode: Running mode, train: training, finetune: fine-tuning, predict
 
 #### Single-Node Training
 
-First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/tree/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml).
+First, prepare the configuration file. This guide provides a fine-tuning configuration file for the Qwen2.5-7B model, `finetune_qwen2_5_7b_8k.yaml`, available for download from the [Gitee repository](https://gitee.com/mindspore/docs/blob/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml).
 
 Then, modify the parameters in the configuration file based on actual conditions, mainly including:
 
diff --git a/docs/mindformers/docs/source_zh_cn/example/convert_ckpt_to_megatron/convert_ckpt_to_megatron.md b/docs/mindformers/docs/source_zh_cn/example/convert_ckpt_to_megatron/convert_ckpt_to_megatron.md
index 4fbb4bf550..d2ee7cf99d 100644
--- a/docs/mindformers/docs/source_zh_cn/example/convert_ckpt_to_megatron/convert_ckpt_to_megatron.md
+++ b/docs/mindformers/docs/source_zh_cn/example/convert_ckpt_to_megatron/convert_ckpt_to_megatron.md
@@ -14,7 +14,7 @@
     git clone https://github.com/NVIDIA/Megatron-LM.git -b core_r0.12.0
     ```
 
-2. 拷贝[转换脚本](https://gitee.com/mindspore/docs/tree/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/convert_ckpt_to_megatron/convert_ckpt_to_megatron/loader_core_mf.py)到 Megatron-LM/tools/checkpoint/ 目录下。
+2. 拷贝[转换脚本](https://gitee.com/mindspore/docs/blob/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/convert_ckpt_to_megatron/convert_ckpt_to_megatron/loader_core_mf.py)到 Megatron-LM/tools/checkpoint/ 目录下。
 
 ## 模型权重准备
 
diff --git a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
index 6184d32fe2..15ed443a5b 100644
--- a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
+++ b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
@@ -64,7 +64,7 @@ MindSpore Transformers提供在线加载Hugging Face数据集的能力,详细
 
 #### 单卡训练
 
-首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k_1p.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml)下载。
+首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k_1p.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/blob/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k_1p.yaml)下载。
 
 > 由于单卡显存有限,配置文件中的`num_layers`被设置为了4,仅作为示例使用。
 
@@ -106,7 +106,7 @@ run_mode: 运行模式,train:训练,finetune:微调,predict
 
 #### 单机训练
 
-首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/tree/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml)下载。
+首先准备配置文件,本实践流程以Qwen2.5-7B模型为例,提供了一个微调配置文件`finetune_qwen2_5_7b_8k.yaml`,可以在[gitee仓库](https://gitee.com/mindspore/docs/blob/r2.7.0rc1/docs/mindformers/docs/source_zh_cn/example/supervised_fine_tuning/finetune_qwen2_5_7b_8k.yaml)下载。
 
 然后根据实际情况修改配置文件中的参数,主要包括:
 
diff --git a/tutorials/source_en/model_infer/ms_infer/llm_inference_overview.md b/tutorials/source_en/model_infer/ms_infer/llm_inference_overview.md
index 3d883b9bf6..ee8a88d002 100644
--- a/tutorials/source_en/model_infer/ms_infer/llm_inference_overview.md
+++ b/tutorials/source_en/model_infer/ms_infer/llm_inference_overview.md
@@ -146,7 +146,7 @@ config = "/path/to/llama2_7b.yaml"
 model = AutoModel.from_config(config)
 ```
 
-In this code, tokenizer.model is a tokenizer file downloaded along with the weights from the Hugging Face official website, containing the token mapping table, while config is the model configuration file from MindFormers, which includes the relevant parameters for running the Llama2 model. You can obtain the sample from [predict_llama2_7b.yaml](https://gitee.com/mindspore/mindformers/blob/r1.5.0/configs/llama2/predict_llama2_7b.yaml) (Note: Change the CKPT weight path to the actual weight path). For details, see [Llama 2](https://gitee.com/mindspore/mindformers/blob/r1.5.0/docs/model_cards/llama2.md#-18).
+In this code, tokenizer.model is a tokenizer file downloaded along with the weights from the Hugging Face official website, containing the token mapping table, while config is the model configuration file from MindFormers, which includes the relevant parameters for running the Llama2 model.
 
 In addition, if you have special requirements for the model or have a deep understanding of deep learning, you can build your own model. For details, see [Model Development](./model_dev.md).
 
diff --git a/tutorials/source_zh_cn/model_infer/ms_infer/llm_inference_overview.md b/tutorials/source_zh_cn/model_infer/ms_infer/llm_inference_overview.md
index 1e14cb04c5..30c82f359d 100644
--- a/tutorials/source_zh_cn/model_infer/ms_infer/llm_inference_overview.md
+++ b/tutorials/source_zh_cn/model_infer/ms_infer/llm_inference_overview.md
@@ -146,7 +146,7 @@ config = "/path/to/llama2_7b.yaml"
 model = AutoModel.from_config(config)
 ```
 
-其中,tokenizer.model是从Hugging Face官网下载的权重文件中附带的tokenizer文件,里面记录了tokens的映射表;config是MindFormers的模型配置文件,其中包含了Llama2模型运行的相关参数,样例可以在[predict_llama2_7b.yaml](https://gitee.com/mindspore/mindformers/blob/r1.6.0/configs/llama2/predict_llama2_7b.yaml)获取(注意:需要将ckpt权重路径改为实际的权重路径)。更详细的教程可以在[Llama 2](https://gitee.com/mindspore/mindformers/blob/dev/docs/model_cards/llama2.md#-18)获取。
+其中,tokenizer.model是从Hugging Face官网下载的权重文件中附带的tokenizer文件,里面记录了tokens的映射表;config是MindFormers的模型配置文件,其中包含了Llama2模型运行的相关参数。
 
 此外,如果用户对于模型有自己的特殊需求,或者对深度学习有较深认识,也可以选择自己构建模型,详细教程见[从零构建大语言模型推理网络](./model_dev.md)。
 
-- 
Gitee
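For reference, the tutorial paragraphs touched by the last two hunks describe loading Llama2 through MindFormers from a YAML configuration and a tokenizer.model file. A minimal sketch of that flow is below; the file paths are placeholders, and the `LlamaTokenizer` and `generate` usage is an assumption drawn from the surrounding tutorial text rather than anything changed by this patch.

```python
# Minimal illustrative sketch, not part of the patch.
# Paths are placeholders; LlamaTokenizer and generate() usage are assumptions
# based on the tutorial context around the edited paragraphs.
from mindformers import AutoModel, LlamaTokenizer

tokenizer = LlamaTokenizer(vocab_file="/path/to/tokenizer.model")  # token mapping table
model = AutoModel.from_config("/path/to/llama2_7b.yaml")           # parameters for running Llama2

input_ids = tokenizer("I love Beijing, because")["input_ids"]
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0]))
```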