From d9fd1fbd61109b78486c529eab3f83acf82cc3fa Mon Sep 17 00:00:00 2001
From: Yule100
Date: Mon, 8 Dec 2025 14:21:45 +0800
Subject: [PATCH] bugfix: correct the description of the do_sample inference
 configuration
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 docs/mindformers/docs/source_en/feature/start_tasks.md    | 2 +-
 docs/mindformers/docs/source_zh_cn/feature/start_tasks.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/mindformers/docs/source_en/feature/start_tasks.md b/docs/mindformers/docs/source_en/feature/start_tasks.md
index fab3b044dd..4707a642e7 100644
--- a/docs/mindformers/docs/source_en/feature/start_tasks.md
+++ b/docs/mindformers/docs/source_en/feature/start_tasks.md
@@ -64,7 +64,7 @@ In the root directory of the MindSpore Transformers code, execute the `run_mindf
 | `--modal_type` | Modal type of input data for predict. This parameter has been deprecated and will be removed in the next version. | str, optional | predict |
 | `--adapter_id` | LoRA ID for predict. This parameter has been deprecated and will be removed in the next version. | str, optional | predict |
 | `--predict_batch_size` | The batch size for multi-batch inference. | int, optional | predict |
-| `--do_sample` | Whether to use random sampling when selecting tokens for inference. | int, optional, ``True`` means using sampling encoding, ``False`` means using greedy decoding. | predict |
+| `--do_sample` | Whether to use random sampling when selecting tokens for inference. | bool, optional, ``True`` means using sampling encoding, ``False`` means using greedy decoding. | predict |
 
 ## Distributed Task Pull-up Script
 
diff --git a/docs/mindformers/docs/source_zh_cn/feature/start_tasks.md b/docs/mindformers/docs/source_zh_cn/feature/start_tasks.md
index 96e642a229..22b1a7a88b 100644
--- a/docs/mindformers/docs/source_zh_cn/feature/start_tasks.md
+++ b/docs/mindformers/docs/source_zh_cn/feature/start_tasks.md
@@ -64,7 +64,7 @@ MindSpore Transformers提供了一键启动脚本`run_mindformer.py`和分布式
 | `--modal_type` | 模型推理输入对应模态。该参数已废弃,下个版本删除。 | str,可选 | 推理 |
 | `--adapter_id` | 推理的LoRA ID。该参数已废弃,下个版本删除。 | str,可选 | 推理 |
 | `--predict_batch_size` | 多batch推理的batch_size大小。 | int,可选 | 推理 |
-| `--do_sample` | 推理选择token时是否使用随机采样。 | int,可选,``True`` 表示使用随机采样,``False`` 代表使用贪心搜索。 | 推理 |
+| `--do_sample` | 推理选择token时是否使用随机采样。 | bool,可选,``True`` 表示使用随机采样,``False`` 代表使用贪心搜索。 | 推理 |
 
 ## 分布式任务拉起脚本
 
-- 
Gitee
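
For reference, a minimal sketch of how the corrected boolean `--do_sample` option would be passed when launching a predict task with `run_mindformer.py`. Only `--do_sample` and `--predict_batch_size` are taken from the table touched by this patch; the config path, prompt, and the remaining flags are illustrative assumptions about a typical single-device inference launch, not part of the patch itself.

```shell
# Sketch of a single-device predict launch (assumed example, not from this patch).
# The YAML path and prompt are placeholders; --do_sample and --predict_batch_size
# are the arguments whose description this patch corrects.
python run_mindformer.py \
  --config configs/llama2/predict_llama2_7b.yaml \
  --run_mode predict \
  --predict_data "An increasing sequence: one," \
  --predict_batch_size 1 \
  --do_sample False   # bool: False -> greedy decoding, True -> random sampling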