From 78dc56dcb25cafc9770a3b98ef3c3377b20eb28a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=AE=A6=E6=99=93=E7=8E=B2?= <3174348550@qq.com>
Date: Wed, 3 Sep 2025 17:32:42 +0800
Subject: [PATCH] modify error anchors

---
 .../mindformers/docs/source_en/example/distilled/distilled.md | 2 +-
 .../docs/source_en/guide/supervised_fine_tuning.md            | 4 ++--
 .../docs/source_zh_cn/example/distilled/distilled.md          | 2 +-
 .../docs/source_zh_cn/guide/supervised_fine_tuning.md         | 4 ++--
 tutorials/source_en/beginner/introduction.md                  | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/mindformers/docs/source_en/example/distilled/distilled.md b/docs/mindformers/docs/source_en/example/distilled/distilled.md
index 024710c565..3adaa6db64 100644
--- a/docs/mindformers/docs/source_en/example/distilled/distilled.md
+++ b/docs/mindformers/docs/source_en/example/distilled/distilled.md
@@ -227,7 +227,7 @@ python toolkit/data_preprocess/huggingface/datasets_preprocess.py \

 The processed dataset is stored in `packed_data` and is in the arrow format.

-For more information, see [MindSpore Transformers official documentation > Dataset](https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html#custom-data-handler).
+For more information, see [MindSpore Transformers official documentation > Dataset](https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html#custom-processing).

 ##### Option 2: Using converted data

diff --git a/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md b/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
index a09b5f75a1..ed6c548cb7 100644
--- a/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
+++ b/docs/mindformers/docs/source_en/guide/supervised_fine_tuning.md
@@ -18,7 +18,7 @@ Before fine-tuning, the weight files of the pre-trained model need to be prepare

 ### 2. Dataset Preparation

-MindSpore Transformers currently supports datasets in [Hugging Face format](https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html#huggingface-datasets) and [MindRecord format](https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html#mindrecord-dataset) for the fine-tuning phase. Users can prepare data according to task requirements.
+MindSpore Transformers currently supports datasets in [Hugging Face format](https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html#hugging-face-dataset) and [MindRecord format](https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html#mindrecord-dataset) for the fine-tuning phase. Users can prepare data according to task requirements.

 ### 3. Configuration File Preparation

@@ -52,7 +52,7 @@ MindSpore Transformers supports loading Hugging Face model weights, enabling dir

 ### Dataset Preparation

-MindSpore Transformers supports online loading of Hugging Face datasets. For details, refer to [MindSpore Transformers-Dataset-Hugging Face Dataset](https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html#huggingface-datasets).
+MindSpore Transformers supports online loading of Hugging Face datasets. For details, refer to [MindSpore Transformers-Dataset-Hugging Face Dataset](https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html#hugging-face-dataset).

 This guide uses [llm-wizard/alpaca-gpt4-data](https://huggingface.co/datasets/llm-wizard/alpaca-gpt4-data) as the fine-tuning dataset.

diff --git a/docs/mindformers/docs/source_zh_cn/example/distilled/distilled.md b/docs/mindformers/docs/source_zh_cn/example/distilled/distilled.md
index 83af77a099..69abf4dfcd 100644
--- a/docs/mindformers/docs/source_zh_cn/example/distilled/distilled.md
+++ b/docs/mindformers/docs/source_zh_cn/example/distilled/distilled.md
@@ -227,7 +227,7 @@ python toolkit/data_preprocess/huggingface/datasets_preprocess.py \

 最后在`packed_data`中可以找到处理后的数据集,格式为arrow。

-更多数据集处理的教程请参考[MindSpore Transformers官方文档-数据集](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AEhandler)。
+更多数据集处理的教程请参考[MindSpore Transformers官方文档-数据集](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AE%E5%A4%84%E7%90%86%E5%8A%9F%E8%83%BD)。

 ##### 选项 2: 使用完成转换的数据

diff --git a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
index 807d54477f..45ac025cb3 100644
--- a/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
+++ b/docs/mindformers/docs/source_zh_cn/guide/supervised_fine_tuning.md
@@ -18,7 +18,7 @@ MindSpore Transformers支持全参微调和LoRA高效微调两种SFT微调方式

 ### 2. 数据集准备

-MindSpore Transformers微调阶段当前已支持[Hugging Face格式](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#huggingface%E6%95%B0%E6%8D%AE%E9%9B%86)以及[MindRecord格式](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#mindrecord%E6%95%B0%E6%8D%AE%E9%9B%86)的数据集。用户可根据任务需求完成数据准备。
+MindSpore Transformers微调阶段当前已支持[Hugging Face格式](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#hugging-face%E6%95%B0%E6%8D%AE%E9%9B%86)以及[MindRecord格式](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#mindrecord%E6%95%B0%E6%8D%AE%E9%9B%86)的数据集。用户可根据任务需求完成数据准备。

 ### 3. 配置文件准备

@@ -52,7 +52,7 @@ MindSpore Transformers提供加载Hugging Face模型权重的能力,支持直

 ### 数据集准备

-MindSpore Transformers提供在线加载Hugging Face数据集的能力,详细信息可以参考[MindSpore Transformers-数据集-Hugging Face数据集](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#huggingface%E6%95%B0%E6%8D%AE%E9%9B%86)。
+MindSpore Transformers提供在线加载Hugging Face数据集的能力,详细信息可以参考[MindSpore Transformers-数据集-Hugging Face数据集](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#hugging-face%E6%95%B0%E6%8D%AE%E9%9B%86)。

 本实践流程以[llm-wizard/alpaca-gpt4-data](https://huggingface.co/datasets/llm-wizard/alpaca-gpt4-data)作为微调数据集为例。

diff --git a/tutorials/source_en/beginner/introduction.md b/tutorials/source_en/beginner/introduction.md
index 06a15e52ef..fba9f740f8 100644
--- a/tutorials/source_en/beginner/introduction.md
+++ b/tutorials/source_en/beginner/introduction.md
@@ -36,7 +36,7 @@ The functions of each module are described as follows:

 - **Ascend Application Enablement**: AI platform or service capabilities provided by Huawei major product lines based on MindSpore.
 - **MindSpore**: Support for device-edge-cloud-independent and collaborative unified training and inference frameworks.
-- **CANN**: A driver layer that enables Ascend chips ([learn more](https://www.hiascend.com/en/software/cann)).
+- **CANN**: A driver layer that enables Ascend chips.
 - **Compute Resources**: Ascend serialized IP, chips and servers. For details, click [Huawei Ascend official website](https://e.huawei.com/en/products/servers/ascend).

--
Gitee
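A quick way to sanity-check anchor fixes like the ones in this patch is to fetch each page and confirm that the new URL fragment exists as an `id` in the rendered HTML. The sketch below is illustrative only and is not part of the patch; it assumes network access, the third-party `requests` package, and that the Sphinx-built pages expose section headings via `id` attributes.

```python
# Hedged sketch: verify that the corrected anchors from this commit resolve.
# Assumes `pip install requests` and network access to www.mindspore.cn.
from urllib.parse import unquote

import requests

# (page URL, anchor fragment) pairs taken from the "+" lines of this patch.
LINKS = [
    ("https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html",
     "custom-processing"),
    ("https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html",
     "hugging-face-dataset"),
    ("https://www.mindspore.cn/mindformers/docs/en/master/feature/dataset.html",
     "mindrecord-dataset"),
    ("https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html",
     "%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AE%E5%A4%84%E7%90%86%E5%8A%9F%E8%83%BD"),
    ("https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html",
     "hugging-face%E6%95%B0%E6%8D%AE%E9%9B%86"),
]

for url, fragment in LINKS:
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    # Browsers percent-decode the fragment before matching it against ids,
    # so decode it the same way before searching the page source.
    anchor = unquote(fragment)
    found = f'id="{anchor}"' in response.text
    print(f"{url}#{fragment}: {'OK' if found else 'MISSING'}")
```

Any `MISSING` line flags a fragment that would silently fail to scroll to its section, which is exactly the class of breakage this commit repairs.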