diff --git a/docs/lite/docs/source_zh_cn/infer/runtime_java.md b/docs/lite/docs/source_zh_cn/infer/runtime_java.md
index 8d7547dc79509dfd848920918eaada2383bd12b5..beb279cd62449e4a8962845736bbddf6503c6160 100644
--- a/docs/lite/docs/source_zh_cn/infer/runtime_java.md
+++ b/docs/lite/docs/source_zh_cn/infer/runtime_java.md
@@ -42,7 +42,7 @@ Android项目中使用MindSpore Lite,可以选择采用[C++ API](https://www.m
 
 采用`Gradle`作为构建工具时,首先将`mindspore-lite-{version}.aar`文件移动到目标module的`libs`目录,然后在目标module的`build.gradle`的`repositories`中添加本地引用目录,最后在`dependencies`中添加AAR的依赖,具体如下所示。
 
-> 注意mindspore-lite-{version}是AAR的文件名,需要将{version}替换成对应版本信息。
+> mindspore-lite-{version}是AAR的文件名,需要将{version}替换成对应版本信息。
 
 ```groovy
 repositories {
diff --git a/docs/lite/docs/source_zh_cn/train/train_lenet.md b/docs/lite/docs/source_zh_cn/train/train_lenet.md
index 084004c2df0be9961d65306310faa8fd82e8c2ed..4c7d1ca44dc9563cd73f247695b813d0deb838ad 100644
--- a/docs/lite/docs/source_zh_cn/train/train_lenet.md
+++ b/docs/lite/docs/source_zh_cn/train/train_lenet.md
@@ -2,7 +2,7 @@
 
 [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_zh_cn/train/train_lenet.md)
 
-> 注意:MindSpore已经统一端边云推理API,如您想继续使用MindSpore Lite独立API进行端侧训练,可以参考[此文档](https://www.mindspore.cn/lite/docs/zh-CN/r1.3/quick_start/train_lenet.html)。
+> MindSpore已经统一端边云推理API,如您想继续使用MindSpore Lite独立API进行端侧训练,可以参考[此文档](https://www.mindspore.cn/lite/docs/zh-CN/r1.3/quick_start/train_lenet.html)。
 
 ## 概述
 
diff --git a/docs/mindformers/docs/source_en/function/distributed_parallel.md b/docs/mindformers/docs/source_en/function/distributed_parallel.md
index ec76fd1ab85c6c3f18ce2ba3551edd6cf7a3c72f..9e78501aaea9457812c59bc763568199ce2b111f 100644
--- a/docs/mindformers/docs/source_en/function/distributed_parallel.md
+++ b/docs/mindformers/docs/source_en/function/distributed_parallel.md
@@ -166,6 +166,6 @@ In the [Llama3-70B fine-tuning configuration](https://gitee.com/kong_de_shu/mind
 - **Multi-copy parallelism**: Sequential scheduling algorithm is used to control the parallelism of fine-grained multi-branch operations (`fine_grain_interleave: 2`), improving the overlap of computing and communications.
 - **Optimizer parallelism**: The calculation of optimizers is distributed to multiple devices to reduce memory usage (`enable_parallel_optimizer: True`).
 
-> Note: Sequential parallelism must be turned on at the same time that fine-grained multicopy parallelism is turned on.
+> Sequential parallelism must also be enabled when fine-grained multi-copy parallelism is enabled.
 
 With the preceding configurations, the distributed training on Llama3-70B can effectively utilize hardware resources in a multi-node multi-device environment to implement efficient and stable model training.
diff --git a/docs/mindformers/docs/source_zh_cn/example/distilled/distilled.md b/docs/mindformers/docs/source_zh_cn/example/distilled/distilled.md
index 13ed805a90f6f1719c305d4404e8b0ebffad0fc8..ab7dd62624aec6c080169a07bee08d1a9c1a09ce 100644
--- a/docs/mindformers/docs/source_zh_cn/example/distilled/distilled.md
+++ b/docs/mindformers/docs/source_zh_cn/example/distilled/distilled.md
@@ -317,4 +317,4 @@ bash scripts/msrun_launcher.sh "run_mindformer.py --config distilled/finetune_qw
 | OpenR1-Qwen-7B (MindSpore Transformers) | 90.0 |
 | OpenThinker-7B | 89.6 |
 
-> 注:上表第三行为本案例实验结果,该结果由本地实测得到。
+> 上表第三行为本案例实验结果,该结果由本地实测得到。
diff --git a/docs/mindformers/docs/source_zh_cn/function/distributed_parallel.md b/docs/mindformers/docs/source_zh_cn/function/distributed_parallel.md
index f3d679f1364eff25a820011be344c05a918b7704..513788070434b78b3f02d3094090dfc0974b7231 100644
--- a/docs/mindformers/docs/source_zh_cn/function/distributed_parallel.md
+++ b/docs/mindformers/docs/source_zh_cn/function/distributed_parallel.md
@@ -166,6 +166,6 @@ parallel_config:
 - **多副本并行**:通过执行序调度算法控制细粒度多分支的并行(`fine_grain_interleave: 2`),提高计算与通信的相互掩盖。
 - **优化器并行**:优化器计算分散到多个设备上,以减少内存占用(`enable_parallel_optimizer: True`)。
 
-> 注意:开启细粒度多副本并行的同时必须开启序列并行。
+> 开启细粒度多副本并行的同时必须开启序列并行。
 
 通过以上配置,Llama3-70B的分布式训练在多机多卡环境中可以有效利用硬件资源,实现高效、稳定的模型训练。
diff --git a/docs/mindformers/docs/source_zh_cn/usage/evaluation.md b/docs/mindformers/docs/source_zh_cn/usage/evaluation.md
index 84e8edd5ec801550e3af17439dbda49d2a831ef7..70aeeb07fdd04aac4503ee834f57b3c99c00964e 100644
--- a/docs/mindformers/docs/source_zh_cn/usage/evaluation.md
+++ b/docs/mindformers/docs/source_zh_cn/usage/evaluation.md
@@ -475,7 +475,7 @@ source toolkit/benchmarks/run_vlmevalkit.sh \
 
 下载[Video-Bench中的答案数据](https://huggingface.co/spaces/LanguageBind/Video-Bench/resolve/main/file/ANSWER.json)。
 
-> 注:Video-Bench中的文本数据按照“egs/VideoBench/Eval_QA”(目录至少两层,且最后一层是`Eval_QA`)的路径格式进行存储;Video-Bench中的视频数据按照“egs/VideoBench/Eval_video”(目录至少两层,且最后一层是`Eval_video`)的路径格式进行存储。
+> Video-Bench中的文本数据按照“egs/VideoBench/Eval_QA”(目录至少两层,且最后一层是`Eval_QA`)的路径格式进行存储;Video-Bench中的视频数据按照“egs/VideoBench/Eval_video”(目录至少两层,且最后一层是`Eval_video`)的路径格式进行存储。
 
 ### 评测
 
diff --git a/docs/mindpandas/docs/source_en/mindpandas_configuration.md b/docs/mindpandas/docs/source_en/mindpandas_configuration.md
index 41fcf3b27d5bb0f8bc0f5fb69286be1ee48e4452..8c4dddb19857afc2debcee82c3eb5577bf9ee59c 100644
--- a/docs/mindpandas/docs/source_en/mindpandas_configuration.md
+++ b/docs/mindpandas/docs/source_en/mindpandas_configuration.md
@@ -70,7 +70,7 @@ df_mean = df.mean()
 
 When MindSpore Pandas is installed, the built-in distributed compute engine has also been installed synchronously, which can be accessed using the command `yrctl` in the console.
 
-> Note: In multi-process mode, please make sure that the cluster you start is only for your personal use. Using a cluster together with others may lead to potential security risks.
+> In multi-process mode, please make sure that the cluster you start is only for your personal use. Using a cluster together with others may lead to potential security risks.
 
 ```shell
 $ yrctl
@@ -117,7 +117,7 @@ Succeeded to start!
 
 After the cluster is deployed, you need to set a multi-process backend to run in the Python script. The method is to call the `set_concurrency_mode` interface, set the `mode` to `"multiprocess"`.
 
-> Note: We recommend calling `set_concurrency_mode` immediately after `import mindpandas` to set the concurrency mode. Switching the parallel mode while the script is running may cause the program failure.
+> We recommend calling `set_concurrency_mode` immediately after `import mindpandas` to set the concurrency mode. Switching the parallel mode while the script is running may cause the program to fail.
 
 ```python
 import mindpandas as pd
diff --git a/docs/mindpandas/docs/source_zh_cn/mindpandas_configuration.md b/docs/mindpandas/docs/source_zh_cn/mindpandas_configuration.md
index fec21fb55481c6f049a809aa5b7cbd664679c84a..df1880b32f15571e3b86ac945f4ada31600ab38e 100644
--- a/docs/mindpandas/docs/source_zh_cn/mindpandas_configuration.md
+++ b/docs/mindpandas/docs/source_zh_cn/mindpandas_configuration.md
@@ -70,7 +70,7 @@ df_mean = df.mean()
 
 安装MindSpore Pandas时,内置的分布式计算引擎也已经同步安装完成,可以在控制台使用指令`yrctl`访问。
 
-> 注意:多进程模式下请确保您启动的集群仅由您个人使用,与他人共同使用一个集群可能导致潜在的安全风险。
+> 多进程模式下请确保您启动的集群仅由您个人使用,与他人共同使用一个集群可能导致潜在的安全风险。
 
 ```shell
 $ yrctl
@@ -117,7 +117,7 @@ Succeeded to start!
 
 集群部署完成后,在Python脚本中需要设置使用多进程后端运行。方法是调用`set_concurrency_mode`接口,设置`mode`为`"multiprocess"`。
 
-> 注意:我们建议在`import mindpandas`之后马上调用`set_concurrency_mode`进行并行模式的设置。在脚本运行过程中切换并行模式将可能导致程序出错。
+> 我们建议在`import mindpandas`之后马上调用`set_concurrency_mode`进行并行模式的设置。在脚本运行过程中切换并行模式将可能导致程序出错。
 
 ```python
 import mindpandas as pd
diff --git a/docs/vllm_mindspore/docs/source_zh_cn/getting_started/tutorials/deepseek_multiNode/deepseek_r1_671b_w8a8_tp16_multi_node.md b/docs/vllm_mindspore/docs/source_zh_cn/getting_started/tutorials/deepseek_multiNode/deepseek_r1_671b_w8a8_tp16_multi_node.md
index 22a74c910e2d043f8b4ca8214b3edd904ac35189..af563414a8a993bc6ba58f27887779c7999a34ca 100644
--- a/docs/vllm_mindspore/docs/source_zh_cn/getting_started/tutorials/deepseek_multiNode/deepseek_r1_671b_w8a8_tp16_multi_node.md
+++ b/docs/vllm_mindspore/docs/source_zh_cn/getting_started/tutorials/deepseek_multiNode/deepseek_r1_671b_w8a8_tp16_multi_node.md
@@ -86,7 +86,7 @@ git clone https://modelers.cn/MindSpore-Lab/DeepSeek-R1-W8A8.git
 
 分别在主从节点配置如下环境变量:
 
-> 注:环境变量必须设置在 Ray 创建集群前,且当环境有变更时,需要通过 `ray stop` 将主从节点集群停止,并重新创建集群,否则环境变量将不生效。
+> 环境变量必须设置在 Ray 创建集群前,且当环境有变更时,需要通过 `ray stop` 将主从节点集群停止,并重新创建集群,否则环境变量将不生效。
 
 ```bash
 source /usr/local/Ascend/ascend-toolkit/set_env.sh
diff --git a/tools/notebook_lint/README_CN.md b/tools/notebook_lint/README_CN.md
index 9eb6b5e107606e5c047f4ebd4ac27c900f37209f..9792c753ffc6af3ed19f644af3dbd9c5aaaab593 100644
--- a/tools/notebook_lint/README_CN.md
+++ b/tools/notebook_lint/README_CN.md
@@ -10,7 +10,7 @@
 - `Notebook_Markdownlint`: 以`markdownlint`作为检测工具的对象,其执行检测的方法为`check`。
 - `PrintInfo`:传入各个检测对象的检测结果,并将检测信息过滤后打印出来。
 
-> 注意,检测结果的输出值为`list`格式,其中元素值格式为`(文件名, 报错单元, 报错单元行, 报错码, 报错信息)`
+> 检测结果的输出值为`list`格式,其中元素值格式为`(文件名, 报错单元, 报错单元行, 报错码, 报错信息)`
 
 ## 环境准备
 
diff --git a/tutorials/source_zh_cn/cv/vit.ipynb b/tutorials/source_zh_cn/cv/vit.ipynb
index a4f4c42ac18dabca4c7565c2804aa658865f0211..5434e1b25a674721c1fa38f4f6132fe5e7753449 100644
--- a/tutorials/source_zh_cn/cv/vit.ipynb
+++ b/tutorials/source_zh_cn/cv/vit.ipynb
@@ -47,7 +47,7 @@
     "\n",
     "下面将通过代码实例来详细解释基于ViT实现ImageNet分类任务。\n",
     "\n",
-    "> 注意,本教程在CPU上运行时间过长,不建议使用CPU运行。"
+    "> 本教程在CPU上运行时间过长,不建议使用CPU运行。"
    ]
   },
  {