From 68d1707606bcdf29874934f8b2e702232942bfa9 Mon Sep 17 00:00:00 2001
From: xuanyue
Date: Mon, 21 Jul 2025 20:58:35 +0800
Subject: [PATCH] Supplement the documentation for the converter tool interface
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 docs/lite/docs/source_en/mindir/converter_tool.md    | 1 +
 docs/lite/docs/source_zh_cn/mindir/converter_tool.md | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/lite/docs/source_en/mindir/converter_tool.md b/docs/lite/docs/source_en/mindir/converter_tool.md
index 795367b3a6..840790acbe 100644
--- a/docs/lite/docs/source_en/mindir/converter_tool.md
+++ b/docs/lite/docs/source_en/mindir/converter_tool.md
@@ -76,6 +76,7 @@ Detailed parameter descriptions are provided below.
 | `--inputDataType=` | No | Set the data type of the quantized model input tensor. Effective only if the quantization parameters (scale and zero point) of the model input tensor are available. The default is to keep the same data type as the original model input tensor. | FLOAT32, INT8, UINT8, DEFAULT | DEFAULT | Not supported at the moment |
 | `--outputDataType=` | No | Set the data type of the quantized model output tensor. Effective only if the quantization parameters (scale and zero point) of the model output tensor are available. The default is to keep the same data type as the original model output tensor. | FLOAT32, INT8, UINT8, DEFAULT | DEFAULT | Not supported at the moment |
 | `--device=` | No | Set the target device when converting the model. The use case is on Ascend devices: if you need the converted model to use the Ascend backend for inference, set this parameter; if it is not set, the converted model uses the CPU backend for inference by default. | Ascend, Ascend310, Ascend310P | - | This option will be deprecated. It is replaced by setting the `optimize` option to `ascend_oriented` |
+| `--optimizeTransformer=` | No | Set whether to enable transformer fusion. | true, false | false | Only supported by TensorRT |
 
 Notes:
diff --git a/docs/lite/docs/source_zh_cn/mindir/converter_tool.md b/docs/lite/docs/source_zh_cn/mindir/converter_tool.md
index ddebe202f5..0188afdf57 100644
--- a/docs/lite/docs/source_zh_cn/mindir/converter_tool.md
+++ b/docs/lite/docs/source_zh_cn/mindir/converter_tool.md
@@ -76,6 +76,7 @@ MindSpore Lite云侧推理模型转换工具提供了多种参数设置,用户
 | `--inputDataType=` | 否 | 设定量化模型输入tensor的data type。仅当模型输入tensor的量化参数(scale和zero point)齐备时有效。默认与原始模型输入tensor的data type保持一致。 | FLOAT32、INT8、UINT8、DEFAULT | DEFAULT | 暂不支持 |
 | `--outputDataType=` | 否 | 设定量化模型输出tensor的data type。仅当模型输出tensor的量化参数(scale和zero point)齐备时有效。默认与原始模型输出tensor的data type保持一致。 | FLOAT32、INT8、UINT8、DEFAULT | DEFAULT | 暂不支持 |
 | `--device=` | 否 | 设置转换模型时的目标设备。使用场景是在Ascend设备上,如果你需要转换生成的模型调用Ascend后端执行推理,则设置该参数,若未设置,默认模型调用CPU后端推理。 | Ascend、Ascend310、Ascend310P | - | 该选项即将废弃,使用optimize配置ascend_oriented替代 |
+| `--optimizeTransformer=` | 否 | 是否使能transformer融合。 | true、false | false | 仅支持TensorRT |
 
 注意事项:
--
Gitee
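For reference, a hypothetical invocation of the documented flag might look like the following sketch. The model file, output name, and framework are placeholders, not taken from this patch; only `--optimizeTransformer` and the other flag names come from the converter tool's documented parameter table.

```shell
# Hypothetical sketch: convert an ONNX model with transformer fusion
# enabled for the TensorRT backend (file names are placeholders).
./converter_lite \
    --fmk=ONNX \
    --modelFile=bert.onnx \
    --outputFile=bert_fused \
    --optimizeTransformer=true
```

Per the new table row, `--optimizeTransformer` defaults to `false` and is only effective with the TensorRT backend; on other backends the flag is ignored.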