diff --git a/tutorials/source_en/advanced_use/network_migration.md b/tutorials/source_en/advanced_use/network_migration.md
index 78a6aa7460e6697438b9e7bb101bf3fa3ab7d397..131e852ce8525bba2c209deb2e042a5431a29b19 100644
--- a/tutorials/source_en/advanced_use/network_migration.md
+++ b/tutorials/source_en/advanced_use/network_migration.md
@@ -266,27 +266,7 @@ Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https
 ### Inference Phase
 
-Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms.
-
-1. Inference on the Ascend 910 AI processor
-
-   Similar to the `estimator.evaluate()` API of TensorFlow, MindSpore provides the `model.eval()` API for model validation. You only need to import the validation dataset. The processing method of the validation dataset is the same as that of the training dataset. For details about the complete code, see .
-
-    ```python
-    res = model.eval(dataset)
-    ```
-
-2. Inference on the Ascend 310 AI processor
-
-   1. Export the ONNX or GEIR model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
-
-   2. For performing inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For details about the bare-metal environment (compared with the cloud environment where the Ascend 310 AI processor is deployed locally), see the description document of the Ascend 310 AI processor software package.
-
-3. Inference on a GPU
-
-   1. Export the ONNX model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx).
-
-   2. Perform inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
+Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. Refer to the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html) for detailed steps.
 
 ## Examples
diff --git a/tutorials/source_en/index.rst b/tutorials/source_en/index.rst
index ee2738003ef3478fe19217189b8ae72463629b4d..8328339b36ebab62b5397cbc734266d3025c568d 100644
--- a/tutorials/source_en/index.rst
+++ b/tutorials/source_en/index.rst
@@ -19,7 +19,9 @@ MindSpore Tutorials
    :caption: Use
 
    use/data_preparation/data_preparation
+   use/defining_the_network
    use/saving_and_loading_model_parameters
+   use/multi_platform_inference
 
 .. toctree::
    :glob:
diff --git a/tutorials/source_en/use/defining_the_network.rst b/tutorials/source_en/use/defining_the_network.rst
new file mode 100644
index 0000000000000000000000000000000000000000..dacc6c83cda764f5c035ee0a09913d9fabd92cb8
--- /dev/null
+++ b/tutorials/source_en/use/defining_the_network.rst
@@ -0,0 +1,7 @@
+Defining the Network
+====================
+
+.. toctree::
+   :maxdepth: 1
+
+   Network List
\ No newline at end of file
diff --git a/tutorials/source_en/use/multi_platform_inference.md b/tutorials/source_en/use/multi_platform_inference.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb311cca1b51faefe29be5c556c3dd01c97bd7d7
--- /dev/null
+++ b/tutorials/source_en/use/multi_platform_inference.md
@@ -0,0 +1,41 @@
+# Multi-platform Inference
+
+<!-- TOC -->
+
+- [Multi-platform Inference](#multi-platform-inference)
+    - [Overview](#overview)
+    - [On-Device Inference](#on-device-inference)
+
+<!-- /TOC -->
+
+## Overview
+
+Models trained with MindSpore can be used for inference on different hardware platforms. This document introduces the inference process on each platform.
+
+1. Inference on the Ascend 910 AI processor
+
+   MindSpore provides the `model.eval()` API for model validation. You only need to import the validation dataset. The processing method of the validation dataset is the same as that of the training dataset. For details about the complete code, see .
+
+    ```python
+    res = model.eval(dataset)
+    ```
+
+   In addition, the `model.predict()` interface can be used for inference. For detailed usage, please refer to the API description.
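+
+   As a minimal, illustrative sketch (assuming `model` wraps an image classification network that expects `1x1x32x32` inputs; adjust the shape and dtype to your own network), single-sample inference could look like this:
+
+    ```python
+    import numpy as np
+    from mindspore import Tensor
+
+    # A single dummy input; in practice this would be a real preprocessed sample.
+    input_data = Tensor(np.random.rand(1, 1, 32, 32).astype(np.float32))
+    res = model.predict(input_data)
+    ```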
+
+2. Inference on the Ascend 310 AI processor
+
+   1. Export the ONNX or GEIR model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx) tutorial (a minimal export sketch follows this list).
+
+   2. For inference in the cloud environment, see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). For the bare-metal environment (that is, unlike the cloud environment, an Ascend 310 AI processor is deployed locally), see the documentation of the Ascend 310 AI processor software package.
+
+3. Inference on a GPU
+
+   1. Export the ONNX model by referring to the [Export GEIR Model and ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#geironnx) tutorial.
+
+   2. Perform inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
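+
+For reference, the following is a minimal export sketch under stated assumptions: it uses the `export` API described in the model export tutorial linked above, a `LeNet5` network assumed to be defined elsewhere, and a hypothetical checkpoint file name. Adapt the names, input shape, and `file_format` to your own model and target platform.
+
+```python
+import numpy as np
+from mindspore import Tensor
+from mindspore.train.serialization import export, load_checkpoint, load_param_into_net
+
+# Rebuild the network and load the trained parameters from a checkpoint.
+net = LeNet5()  # hypothetical network, assumed to be defined elsewhere
+load_param_into_net(net, load_checkpoint("lenet.ckpt"))  # hypothetical file name
+
+# A dummy input with the same shape and dtype as the real input data;
+# it only fixes the graph's input definition, so its values do not matter.
+input_data = Tensor(np.random.rand(1, 1, 32, 32).astype(np.float32))
+
+# file_format can be "ONNX" or "GEIR", depending on the target platform.
+export(net, input_data, file_name="lenet.onnx", file_format="ONNX")
+```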
+
+## On-Device Inference
+
+On-Device Inference is based on MindSpore Predict. For details, see the [On-Device Inference Tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/on_device_inference.html).
diff --git a/tutorials/source_zh_cn/advanced_use/network_migration.md b/tutorials/source_zh_cn/advanced_use/network_migration.md
index 742f3888cf78af6295c95b3b2928566a27aefeec..f1d8aa4feb3ad8fa1d308cbc98f9d83292f75375 100644
--- a/tutorials/source_zh_cn/advanced_use/network_migration.md
+++ b/tutorials/source_zh_cn/advanced_use/network_migration.md
@@ -261,27 +261,7 @@ MindSpore organizes network structures somewhat differently from TensorFlow and PyTorch
 ### Inference Phase
 
-Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms.
-
-1. Inference on the Ascend 910 AI processor
-
-   Similar to TensorFlow's `estimator.evaluate()` API, MindSpore provides the `model.eval()` API for model validation. You only need to pass in the validation dataset, which is processed in the same way as the training dataset. For the complete code, see .
-
-    ```python
-    res = model.eval(dataset)
-    ```
-
-2. Inference on the Ascend 310 AI processor
-
-   1. Generate an ONNX or GEIR model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
-
-   2. In the cloud environment, complete inference by referring to the [Ascend 910 training and Ascend 310 inference sample](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). In the bare-metal environment (that is, unlike the cloud environment, an Ascend 310 AI processor is available locally), see the documentation of the Ascend 310 AI processor software package.
-
-3. Inference on a GPU
-
-   1. Generate an ONNX model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
-
-   2. Complete inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
+Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. For detailed steps, see the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html).
 
 ## Examples
@@ -289,4 +269,4 @@
 2. [Samples for loading common datasets](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html)
 
-3. [Pre-trained models in the Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
\ No newline at end of file
+3. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
\ No newline at end of file
diff --git a/tutorials/source_zh_cn/index.rst b/tutorials/source_zh_cn/index.rst
index 69d0866d1367b71953f3e38f5fb0fae59a876d29..7c8769abb9157bab6db2d0b22cae89cffe3d3cc7 100644
--- a/tutorials/source_zh_cn/index.rst
+++ b/tutorials/source_zh_cn/index.rst
@@ -22,6 +22,7 @@ MindSpore Tutorials
    use/data_preparation/data_preparation
    use/defining_the_network
    use/saving_and_loading_model_parameters
+   use/multi_platform_inference
 
 .. toctree::
    :glob:
diff --git a/tutorials/source_zh_cn/use/defining_the_network.rst b/tutorials/source_zh_cn/use/defining_the_network.rst
index 4f875363bfa03c4477c87a980e3b04f87e29c1e2..d6d2bfba310f930c14a4d450256705391e596012 100644
--- a/tutorials/source_zh_cn/use/defining_the_network.rst
+++ b/tutorials/source_zh_cn/use/defining_the_network.rst
@@ -4,4 +4,5 @@
 .. toctree::
    :maxdepth: 1
 
+   Network List
    custom_operator
\ No newline at end of file
diff --git a/tutorials/source_zh_cn/use/multi_platform_inference.md b/tutorials/source_zh_cn/use/multi_platform_inference.md
new file mode 100644
index 0000000000000000000000000000000000000000..f1b8f5a219ca32f2edba1c34f5f42a41a01e900f
--- /dev/null
+++ b/tutorials/source_zh_cn/use/multi_platform_inference.md
@@ -0,0 +1,44 @@
+# Multi-platform Inference
+
+<!-- TOC -->
+
+- [Multi-platform Inference](#multi-platform-inference)
+    - [Overview](#overview)
+    - [Inference on the Ascend 910 AI Processor](#inference-on-the-ascend-910-ai-processor)
+    - [Inference on the Ascend 310 AI Processor](#inference-on-the-ascend-310-ai-processor)
+    - [Inference on a GPU](#inference-on-a-gpu)
+    - [On-Device Inference](#on-device-inference)
+
+<!-- /TOC -->
+
+## Overview
+
+Models trained with MindSpore can be used for inference on different hardware platforms. This document introduces the inference process on each platform.
+
+## Inference on the Ascend 910 AI Processor
+
+MindSpore provides the `model.eval()` API for model validation. You only need to pass in the validation dataset, which is processed in the same way as the training dataset. For the complete code, see .
+
+```python
+res = model.eval(dataset)
+```
+
+In addition, the `model.predict()` interface can be used for inference. For detailed usage, see the API description.
+
+## Inference on the Ascend 310 AI Processor
+
+1. Generate an ONNX or GEIR model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
+
+2. In the cloud environment, complete inference by referring to the [Ascend 910 training and Ascend 310 inference sample](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html). In the bare-metal environment (that is, unlike the cloud environment, an Ascend 310 AI processor is available locally), see the documentation of the Ascend 310 AI processor software package.
+
+## Inference on a GPU
+
+1. Generate an ONNX model by referring to [Model Export](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#geironnx).
+
+2. Complete inference on the NVIDIA GPU by referring to [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
+
+## On-Device Inference
+
+On-device inference requires the MindSpore Predict inference engine. For details, see the [On-Device Inference Tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/on_device_inference.html).
\ No newline at end of file