From 8ba9f5490390a1bcc608e84906da69f60efa6450 Mon Sep 17 00:00:00 2001
From: hzb <1219326125@qq.com>
Date: Tue, 1 Jun 2021 09:29:10 +0800
Subject: [PATCH 1/2] Modified inference.md

---
 docs/migration_guide/source_en/inference.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/docs/migration_guide/source_en/inference.md b/docs/migration_guide/source_en/inference.md
index 11953d8037..efc29dd655 100644
--- a/docs/migration_guide/source_en/inference.md
+++ b/docs/migration_guide/source_en/inference.md
@@ -32,9 +32,11 @@ For dominating the difference between backend models, model files in the [MindIR
 - For the GPU hardware platform, please refer to [Inference on a GPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_gpu.html).
 - For the CPU hardware platform, please refer to [Inference on a CPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_cpu.html).
 - For inference on the Lite platform on device, please refer to [on-device inference](https://www.mindspore.cn/tutorial/lite/en/master/index.html).
-
+
+>
 > Please refer to [MindSpore C++ Library Use](https://www.mindspore.cn/doc/faq/en/master/inference.html#c) to solve the interface issues on the Ascend hardware platform.
 
+
 ## On-line Inference Service Deployment Based on MindSpore Serving
 
 MindSpore Serving is a lite and high-performance service module, aiming at assisting MindSpore developers in efficiently deploying on-line inference services. When a user completes the training task by using MindSpore, the trained model can be exported for inference service deployment through MindSpore Serving. Please refer to the following examples for deployment:
@@ -45,4 +47,6 @@ MindSpore Serving is a lite and high-performance service module, aiming at assis
 - [Servable Provided Through Model Configuration](https://www.mindspore.cn/tutorial/inference/en/master/serving_model.html).
 - [MindSpore Serving-based Distributed Inference Service Deployment](https://www.mindspore.cn/tutorial/inference/en/master/serving_distributed_example.html).
+>
 > For deployment issues regarding the on-line inference service, please refer to [MindSpore Serving](https://www.mindspore.cn/doc/faq/en/master/inference.html#mindspore-serving).
+
 
--
Gitee

From bb65f29247e7d207f9d1093a87c8557e97585079 Mon Sep 17 00:00:00 2001
From: hzb <1219326125@qq.com>
Date: Tue, 1 Jun 2021 09:29:10 +0800
Subject: [PATCH 2/2] Modified inference.md

---
 docs/migration_guide/source_en/inference.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/migration_guide/source_en/inference.md b/docs/migration_guide/source_en/inference.md
index efc29dd655..a532cb175b 100644
--- a/docs/migration_guide/source_en/inference.md
+++ b/docs/migration_guide/source_en/inference.md
@@ -33,7 +33,8 @@ For dominating the difference between backend models, model files in the [MindIR
 - For the CPU hardware platform, please refer to [Inference on a CPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_cpu.html).
 - For inference on the Lite platform on device, please refer to [on-device inference](https://www.mindspore.cn/tutorial/lite/en/master/index.html).
 
->
+
+> Explanation
 > Please refer to [MindSpore C++ Library Use](https://www.mindspore.cn/doc/faq/en/master/inference.html#c) to solve the interface issues on the Ascend hardware platform.
 
 
@@ -47,6 +48,7 @@ MindSpore Serving is a lite and high-performance service module, aiming at assis
 - [Servable Provided Through Model Configuration](https://www.mindspore.cn/tutorial/inference/en/master/serving_model.html).
 - [MindSpore Serving-based Distributed Inference Service Deployment](https://www.mindspore.cn/tutorial/inference/en/master/serving_distributed_example.html).
->
+
+> Explanation
 > For deployment issues regarding the on-line inference service, please refer to [MindSpore Serving](https://www.mindspore.cn/doc/faq/en/master/inference.html#mindspore-serving).
 
 
--
Gitee