diff --git a/docs/migration_guide/source_en/inference.md b/docs/migration_guide/source_en/inference.md
index 11953d8037d2f8069973455f7232a9255b7dae04..a532cb175be0b9a7e08bd40df32ac36045d06bd6 100644
--- a/docs/migration_guide/source_en/inference.md
+++ b/docs/migration_guide/source_en/inference.md
@@ -32,9 +32,12 @@ For dominating the difference between backend models, model files in the [MindIR
 - For the GPU hardware platform, please refer to [Inference on a GPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_gpu.html).
 - For the CPU hardware platform, please refer to [Inference on a CPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_cpu.html).
 - For inference on the Lite platform on device, please refer to [on-device inference](https://www.mindspore.cn/tutorial/lite/en/master/index.html).
+
+> Explanation: Please refer to [MindSpore C++ Library Use](https://www.mindspore.cn/doc/faq/en/master/inference.html#c) to solve the interface issues on the Ascend hardware platform.
+
 ## On-line Inference Service Deployment Based on MindSpore Serving
 
 MindSpore Serving is a lite and high-performance service module, aiming at assisting MindSpore developers in efficiently deploying on-line inference services. When a user completes the training task by using MindSpore, the trained model can be exported for inference service deployment through MindSpore Serving. Please refer to the following examples for deployment:
@@ -45,4 +48,7 @@ MindSpore Serving is a lite and high-performance service module, aiming at assis
 - [Servable Provided Through Model Configuration](https://www.mindspore.cn/tutorial/inference/en/master/serving_model.html).
 - [MindSpore Serving-based Distributed Inference Service Deployment](https://www.mindspore.cn/tutorial/inference/en/master/serving_distributed_example.html).
+
+> Explanation: For deployment issues regarding the on-line inference service, please refer to [MindSpore Serving](https://www.mindspore.cn/doc/faq/en/master/inference.html#mindspore-serving).