diff --git a/official/cv/ResNet/README.md b/official/cv/ResNet/README.md index 26c078d21520df8ddc41c52d94f755f5a99e2e4b..e207f981ca4cefc8b2cf078f1382279eda3ab412 100644 --- a/official/cv/ResNet/README.md +++ b/official/cv/ResNet/README.md @@ -761,7 +761,7 @@ If you want to predict by inference backend MindSpore Lite, you can directly set python predict.py --checkpoint_file_path [CKPT_PATH] --config_path [CONFIG_PATH] --img_path [IMG_PATH] --enable_predict_lite_backend True > log.txt 2>&1 & ``` -Or you can predict by using MindSpore Lite Python interface, which is shown as follows, please refer to [Using Python Interface to Perform Cloud-side Inference](https://www.mindspore.cn/lite/docs/en/master/use/cloud_infer/runtime_python.html) for details. +Or you can predict by using MindSpore Lite Python interface, which is shown as follows, please refer to [Using Python Interface to Perform Cloud-side Inference](https://www.mindspore.cn/lite/docs/en/master/mindir/runtime_python.html) for details. 
```bash python predict.py --mindir_path [MINDIR_PATH] --config_path [CONFIG_PATH] --img_path [IMG_PATH] --enable_predict_lite_mindir True > log.txt 2>&1 & diff --git a/official/cv/ResNet/README_CN.md b/official/cv/ResNet/README_CN.md index 2d9439b415350c6ced8a9c275455972bb8c5379f..5aafb7f53d0c8794a73313527212d889a4145eff 100644 --- a/official/cv/ResNet/README_CN.md +++ b/official/cv/ResNet/README_CN.md @@ -776,7 +776,7 @@ Prediction avg time: 5.360 ms predict.py --checkpoint_file_path [CKPT_PATH] --config_path [CONFIG_PATH] --img_path [IMG_PATH] --enable_predict_lite_backend True > log.txt 2>&1 & ``` -或者你可以调用MindSpore Lite Python接口进行推理,示例如下,具体细节参考[使用Python接口执行云侧推理](https://www.mindspore.cn/lite/docs/zh-CN/master/use/cloud_infer/runtime_python.html) 。 +或者你可以调用MindSpore Lite Python接口进行推理,示例如下,具体细节参考[使用Python接口执行云侧推理](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/runtime_python.html) 。 ```bash python predict.py --mindir_path [MINDIR_PATH] --config_path [CONFIG_PATH] --img_path [IMG_PATH] --enable_predict_lite_mindir True > log.txt 2>&1 & diff --git a/official/cv/Unet/README.md b/official/cv/Unet/README.md index 85e68301d205f5dcddeca7a28ee04a894b11a6da..f590c050bb71e280ffaabf8ec3ce5a704e7079c1 100644 --- a/official/cv/Unet/README.md +++ b/official/cv/Unet/README.md @@ -617,7 +617,7 @@ Result on ONNX **Before inference, please refer to [MindSpore Inference with C++ Deployment Guide](https://gitee.com/mindspore/models/blob/master/utils/cpp_infer/README.md) to set environment variables.** If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you -can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following +can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). 
Following the steps below, this is a simple example: ### Continue Training on the Pretrained Model diff --git a/official/cv/Unet/README_CN.md b/official/cv/Unet/README_CN.md index cd3aea7da87547ac103e5e1f6ca67abe87130b92..2c664737b711316839b502d8afcb552047be5664 100644 --- a/official/cv/Unet/README_CN.md +++ b/official/cv/Unet/README_CN.md @@ -607,7 +607,7 @@ bash ./scripts/run_eval_onnx.sh [DATASET_PATH] [ONNX_MODEL] [DEVICE_TARGET] [CON **推理前需参照 [MindSpore C++推理部署指南](https://gitee.com/mindspore/models/blob/master/utils/cpp_infer/README_CN.md) 进行环境变量设置。** -如果您需要使用训练好的模型在Ascend 910、Ascend 310等多个硬件平台上进行推理上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。下面是一个简单的操作步骤示例: +如果您需要使用训练好的模型在Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。下面是一个简单的操作步骤示例: ### 继续训练预训练模型 diff --git a/official/cv/VIT/README.md b/official/cv/VIT/README.md index 8632ae8617035de3576b48e2b20d157df517a067..3fe19000a8b13686e2e14016871af156b0b2716b 100644 --- a/official/cv/VIT/README.md +++ b/official/cv/VIT/README.md @@ -449,7 +449,7 @@ in acc.log. ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). 
Following the steps below, this is a simple example: - Running on Ascend diff --git a/official/cv/VIT/README_CN.md b/official/cv/VIT/README_CN.md index 120ac25c7f146541559ec2b44e79299a1a6151b3..3e33d3eed87adbb83dd05ddd3b3d195b114a2c6c 100644 --- a/official/cv/VIT/README_CN.md +++ b/official/cv/VIT/README_CN.md @@ -451,7 +451,7 @@ python export.py --config_path=[CONFIG_PATH] ### 推理 -如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。下面是操作步骤示例: +如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。下面是操作步骤示例: - Ascend处理器环境运行 diff --git a/official/lite/image_classification/README.en.md b/official/lite/image_classification/README.en.md index 0b4a7b900411865d6f735330e0ddedd0985daf0a..b72434558e254a0f8d2a84a42e86eeaf3d941b9e 100644 --- a/official/lite/image_classification/README.en.md +++ b/official/lite/image_classification/README.en.md @@ -65,7 +65,7 @@ The following describes how to use the MindSpore Lite C++ APIs (Android JNIs) an ## Detailed Description of the Sample Program -This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html). +This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html). 
### Sample Program Structure @@ -101,7 +101,7 @@ app ### Configuring MindSpore Lite Dependencies -When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the compile command of generate with image preprocessing module. +When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/build/build.html) to generate the MindSpore Lite version. In this case, you need to use the compile command of generate with image preprocessing module. In this example, the build process automatically downloads the `mindspore-lite-1.0.1-runtime-arm64-cpu` by the `app/download.gradle` file and saves in the `app/src/main/cpp` directory. diff --git a/official/lite/image_classification/README.md b/official/lite/image_classification/README.md index 9c8b2c502d7224c29bd566718675da7582bd0f58..093f7e4eda2aa1940d4e6b2dc3d7d42e22b19071 100644 --- a/official/lite/image_classification/README.md +++ b/official/lite/image_classification/README.md @@ -108,7 +108,7 @@ app ### 配置MindSpore Lite依赖项 -Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html)生成`mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`库文件包并解压缩(包含`libmindspore-lite.so`库文件和相关头文件),在本例中需使用生成带图像预处理模块的编译命令。 +Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/build/build.html)生成`mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`库文件包并解压缩(包含`libmindspore-lite.so`库文件和相关头文件),在本例中需使用生成带图像预处理模块的编译命令。 > version:输出件版本号,与所编译的分支代码对应的版本一致。 > diff --git a/official/lite/image_segmentation/README.en.md b/official/lite/image_segmentation/README.en.md index 
f3866584f8ee8d8a134c9b1f2db3c4acf25f643f..bb7de7da6edeb1098ba772c4444c896fde2361ef 100644 --- a/official/lite/image_segmentation/README.en.md +++ b/official/lite/image_segmentation/README.en.md @@ -67,7 +67,7 @@ The following describes how to use the MindSpore Lite JAVA APIs and MindSpore Li ## Detailed Description of the Sample Program -This image segmentation sample program on the Android device is implemented through Java. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. Then Java API is called to infer.[Runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html). +This image segmentation sample program on the Android device is implemented through Java. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. Then Java API is called to infer.[Runtime](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html). ### Sample Program Structure @@ -100,7 +100,7 @@ app ### Configuring MindSpore Lite Dependencies -When MindSpore Java APIs are called, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the compile command of generate with image preprocessing module. +When MindSpore Java APIs are called, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/build/build.html) to generate the MindSpore Lite version. In this case, you need to use the compile command of generate with image preprocessing module. In this example, the build process automatically downloads the `mindspore-lite-1.0.1-runtime-arm64-cpu` by the `app/download.gradle` file and saves in the `app/src/main/cpp` directory. 
diff --git a/official/lite/image_segmentation/README.md b/official/lite/image_segmentation/README.md index 995ab16c220deb0b2c1548a2ce8708546910308e..a1a36740c748fd830010bdfb961976dbb3d87c1d 100644 --- a/official/lite/image_segmentation/README.md +++ b/official/lite/image_segmentation/README.md @@ -78,7 +78,7 @@ app ### 配置MindSpore Lite依赖项 -Android调用MindSpore Android AAR时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html)生成`mindspore-lite-maven-{version}.zip`库文件包并解压缩(包含`mindspore-lite-{version}.aar`库文件)。 +Android调用MindSpore Android AAR时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/build/build.html)生成`mindspore-lite-maven-{version}.zip`库文件包并解压缩(包含`mindspore-lite-{version}.aar`库文件)。 > version:输出件版本号,与所编译的分支代码对应的版本一致。 > diff --git a/official/lite/object_detection/README.en.md b/official/lite/object_detection/README.en.md index 2db5df083aa00712de40f03168391ae4d4b83e09..e1d00f46055844453018c4fb9b7697cd8c0afc30 100644 --- a/official/lite/object_detection/README.en.md +++ b/official/lite/object_detection/README.en.md @@ -70,7 +70,7 @@ This object detection sample program on the Android device includes a Java layer ### Configuring MindSpore Lite Dependencies -When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the compile command of generate with image preprocessing module. +When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/build/build.html) to generate the MindSpore Lite version. In this case, you need to use the compile command of generate with image preprocessing module. 
In this example, the build process automatically downloads the `mindspore-lite-1.0.1-runtime-arm64-cpu` by the `app/download.gradle` file and saves in the `app/src/main/cpp` directory. diff --git a/official/lite/object_detection/README.md b/official/lite/object_detection/README.md index c2e8fa521a05a41675c8b1d31de65d92d261aca4..b1c87749ac926ffa5d0c4c59f4745fc733ea3d22 100644 --- a/official/lite/object_detection/README.md +++ b/official/lite/object_detection/README.md @@ -69,7 +69,7 @@ ## 示例程序详细说明 -本端侧目标检测Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理(针对推理结果画框)等功能;JNI层在[Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime.html)中完成模型推理的过程。 +本端侧目标检测Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理(针对推理结果画框)等功能;JNI层在[Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/infer/runtime_cpp.html)中完成模型推理的过程。 > 此处详细说明示例程序的JNI层实现,JAVA层运用Android Camera 2 API实现开启设备摄像头以及图像帧处理等功能,需读者具备一定的Android开发基础知识。 @@ -110,7 +110,7 @@ app ### 配置MindSpore Lite依赖项 -Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html)生成`mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`库文件包并解压缩(包含`libmindspore-lite.so`库文件和相关头文件),在本例中需使用生成带图像预处理模块的编译命令。 +Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/build/build.html)生成`mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`库文件包并解压缩(包含`libmindspore-lite.so`库文件和相关头文件),在本例中需使用生成带图像预处理模块的编译命令。 > version:输出件版本号,与所编译的分支代码对应的版本一致。 > diff --git a/official/lite/posenet/README.en.md b/official/lite/posenet/README.en.md index 46fd8847e5d6bbf15f830565b27f1c4096233290..463a3cd6c6fa835ca389afe34cbad73f85a5a0a7 100644 --- a/official/lite/posenet/README.en.md +++ b/official/lite/posenet/README.en.md @@ -66,7 +66,7 @@ This sample application demonstrates how to use the MindSpore Lite API and skele ## Detailed Description of the Sample Application -The 
skeleton detection sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, as well as using [runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html) to complete model inference. +The skeleton detection sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, as well as using [runtime](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html) to complete model inference. ### Sample Application Structure diff --git a/official/lite/posenet/README.md b/official/lite/posenet/README.md index 45a6db6e14b2dfdc5ff1f757154a8ac22d43479c..296d190d77fcedaeb833e9751c1424f4e577104b 100644 --- a/official/lite/posenet/README.md +++ b/official/lite/posenet/README.md @@ -69,7 +69,7 @@ ## 示例程序详细说明 -骨骼检测Android示例程序通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能,在[Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime.html)中完成模型推理的过程。 +骨骼检测Android示例程序通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能,在[Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/infer/runtime_cpp.html)中完成模型推理的过程。 ### 示例程序结构 diff --git a/official/lite/scene_detection/README.en.md b/official/lite/scene_detection/README.en.md index 3ed3a2d6e37c09d1bdca31a0b8cfd2decd0171ce..f5d6085cf3e970e7eb86956e332da01a599b9891 100644 --- a/official/lite/scene_detection/README.en.md +++ b/official/lite/scene_detection/README.en.md @@ -66,7 +66,7 @@ This sample application demonstrates how to use the MindSpore Lite C++ API (Andr ## Detailed Description of the Sample Application -The scene detection sample application on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images (drawing frames based on the inference result). 
At the JNI layer, the model inference process is completed in [runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html). +The scene detection sample application on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images (drawing frames based on the inference result). At the JNI layer, the model inference process is completed in [runtime](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html). > This following describes the JNI layer implementation of the sample application. At the Java layer, the Android Camera 2 API is used to enable a device camera and process image frames. Readers are expected to have the basic Android development knowledge. @@ -107,7 +107,7 @@ app ### Configuring MindSpore Lite Dependencies -When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can refer to [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library file package (including the `libmindspore-lite.so` library file and related header files) and decompress it. The following example uses the build command with the image preprocessing module. +When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can refer to [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/build/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library file package (including the `libmindspore-lite.so` library file and related header files) and decompress it. The following example uses the build command with the image preprocessing module. > version: version number in the output file, which is the same as the version number of the built branch code. 
> diff --git a/official/lite/scene_detection/README.md b/official/lite/scene_detection/README.md index 6e883141ee941d7da67228fd841633bf63c030aa..d68d89a2b324db62824b1cf5ea96329aea0a837d 100644 --- a/official/lite/scene_detection/README.md +++ b/official/lite/scene_detection/README.md @@ -106,7 +106,7 @@ app ### 配置MindSpore Lite依赖项 -Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html)生成`mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`库文件包并解压缩(包含`libmindspore-lite.so`库文件和相关头文件),在本例中需使用生成带图像预处理模块的编译命令。 +Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/build/build.html)生成`mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`库文件包并解压缩(包含`libmindspore-lite.so`库文件和相关头文件),在本例中需使用生成带图像预处理模块的编译命令。 > version:输出件版本号,与所编译的分支代码对应的版本一致。 > diff --git a/official/lite/style_transfer/README.en.md b/official/lite/style_transfer/README.en.md index 6e16b4ab5c9b2f01827e03c96afb7e9cc38fc654..93d7deacd99a927c6e8259aa779627ac3fdfd290 100644 --- a/official/lite/style_transfer/README.en.md +++ b/official/lite/style_transfer/README.en.md @@ -66,7 +66,7 @@ This sample application demonstrates how to use the MindSpore Lite API and MindS ## Detailed Description of the Sample Application -The style transfer sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, as well as using [runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html) to complete model inference. +The style transfer sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, as well as using [runtime](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html) to complete model inference. 
### Sample Application Structure diff --git a/official/lite/style_transfer/README.md b/official/lite/style_transfer/README.md index 4b5bae542aa0f935b40acd967ab8d20f838cc360..e006c0d991394495f67959882cb0fae80155d13a 100644 --- a/official/lite/style_transfer/README.md +++ b/official/lite/style_transfer/README.md @@ -69,7 +69,7 @@ ## 示例程序详细说明 -风格Android示例程序通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能,在[Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime.html)中完成模型推理的过程。 +风格Android示例程序通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能,在[Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/infer/runtime_cpp.html)中完成模型推理的过程。 ### 示例程序结构 diff --git a/research/cv/AlignedReID++/README_CN.md b/research/cv/AlignedReID++/README_CN.md index 559a94c110e06e60cad19b6920fa0c88547dd862..532d291ece48fdefdd6535a4e9db577fdfa292bf 100644 --- a/research/cv/AlignedReID++/README_CN.md +++ b/research/cv/AlignedReID++/README_CN.md @@ -405,7 +405,7 @@ market1501上评估AlignedReID++ ### 推理 -如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。下面是操作步骤示例: +如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。下面是操作步骤示例: 在进行推理之前我们需要先导出模型,mindir可以在本地环境上导出。batch_size默认为1。 diff --git a/research/cv/cnnctc/README.md b/research/cv/cnnctc/README.md index da8ec5fa35beee3895faaab7d26fcbb4240fa6ee..789b52833cb40d48615f217d147c994fed21a1dc 100644 --- a/research/cv/cnnctc/README.md +++ b/research/cv/cnnctc/README.md @@ -542,7 +542,7 @@ accuracy: 0.8427 ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). 
Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/research/cv/cnnctc/README_CN.md b/research/cv/cnnctc/README_CN.md index 1454105ddae5e26ea3c9745aa2d0fd65793e6361..1589dda88fa95caebea3dfda173c9fa8e028a16c 100644 --- a/research/cv/cnnctc/README_CN.md +++ b/research/cv/cnnctc/README_CN.md @@ -485,7 +485,7 @@ accuracy: 0.8427 ### 推理 -如果您需要在GPU、Ascend 910、Ascend 310等多个硬件平台上使用训练好的模型进行推理,请参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。以下为简单示例: +如果您需要在GPU、Ascend 910、Ascend 310等多个硬件平台上使用训练好的模型进行推理,请参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。以下为简单示例: - Ascend处理器环境运行 diff --git a/research/cv/dlinknet/README.md b/research/cv/dlinknet/README.md index f892c97c423e5c52ae5e6b7889faf16d895cfeef..47c9969cc1b3c7dab1ac2b23cd95f540f9f35ec0 100644 --- a/research/cv/dlinknet/README.md +++ b/research/cv/dlinknet/README.md @@ -328,7 +328,7 @@ bash scripts/run_distribute_gpu_train.sh [DATASET] [CONFIG_PATH] [DEVICE_NUM] [C #### inference If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you -can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following +can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). 
Following the steps below, this is a simple example: ##### running-on-ascend-310 diff --git a/research/cv/dlinknet/README_CN.md b/research/cv/dlinknet/README_CN.md index 86a13f42ca56e48e18c9945a5d2ad00e680ac8fb..3911a9e0d16df4bd8efdd40f10ae1f4d0cd01daf 100644 --- a/research/cv/dlinknet/README_CN.md +++ b/research/cv/dlinknet/README_CN.md @@ -333,7 +333,7 @@ bash scripts/run_distribute_gpu_train.sh [DATASET] [CONFIG_PATH] [DEVICE_NUM] [C #### 推理 -如果您需要使用训练好的模型在Ascend 910、Ascend 310等多个硬件平台上进行推理上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。下面是一个简单的操作步骤示例: +如果您需要使用训练好的模型在Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。下面是一个简单的操作步骤示例: ##### Ascend 310环境运行 diff --git a/research/cv/googlenet/README.md b/research/cv/googlenet/README.md index f9e80b295123ecdf1fe49e240af7f256f7ff8d5f..de03081294f110c6c9acda23522f80327a55390f 100644 --- a/research/cv/googlenet/README.md +++ b/research/cv/googlenet/README.md @@ -597,7 +597,7 @@ Current batch_ Size can only be set to 1. ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). 
Following the steps below, this is a simple example: - Running on Ascend diff --git a/research/cv/googlenet/README_CN.md b/research/cv/googlenet/README_CN.md index 883104b4f5284c073f554845c5bd6a2d96f9a44f..3c294fce24c3ebe732ead0ad5cec7fc701c66aa2 100644 --- a/research/cv/googlenet/README_CN.md +++ b/research/cv/googlenet/README_CN.md @@ -598,7 +598,7 @@ python export.py --config_path [CONFIG_PATH] ### 推理 -如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。下面是操作步骤示例: +如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。下面是操作步骤示例: - Ascend处理器环境运行 diff --git a/research/cv/hardnet/README_CN.md b/research/cv/hardnet/README_CN.md index 44f397ae05a802c12178446059551b2680088b78..6eb181d0458d065595b4d00c51ac58ece1396b11 100644 --- a/research/cv/hardnet/README_CN.md +++ b/research/cv/hardnet/README_CN.md @@ -449,7 +449,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID] ### 推理 -如果您需要使用此训练模型在Ascend 910上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。下面是操作步骤示例: +如果您需要使用此训练模型在Ascend 910上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。下面是操作步骤示例: - Ascend处理器环境运行 @@ -486,7 +486,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID] print("==============Acc: {} ==============".format(acc)) ``` -如果您需要使用此训练模型在GPU上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。下面是操作步骤示例: +如果您需要使用此训练模型在GPU上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。下面是操作步骤示例: - GPU处理器环境运行 diff --git a/research/cv/lenet/README.md b/research/cv/lenet/README.md index a410a6a7a219520c7cf4a6848ebf09cac177093a..84f1994deddac4b4390ad1624444cb60f69e2f89 100644 --- a/research/cv/lenet/README.md +++ b/research/cv/lenet/README.md @@ -286,7 +286,7 @@ If you want to predict by 
inference backend MindSpore Lite, you can directly set python predict.py --ckpt_file [CKPT_PATH] --data_path [TEST_DATA_PATH] --enable_predict_lite_backend True > log.txt 2>&1 & ``` -Or you can predict by using MindSpore Lite Python interface, which is shown as follows, please refer to [Using Python Interface to Perform Cloud-side Inference](https://www.mindspore.cn/lite/docs/en/master/use/cloud_infer/runtime_python.html) for details. +Or you can predict by using MindSpore Lite Python interface, which is shown as follows, please refer to [Using Python Interface to Perform Cloud-side Inference](https://www.mindspore.cn/lite/docs/en/master/mindir/runtime_python.html) for details. ```bash python predict.py --mindir_path [MINDIR_PATH] --data_path [TEST_DATA_PATH] --enable_predict_lite_mindir True > log.txt 2>&1 & diff --git a/research/cv/lenet/README_CN.md b/research/cv/lenet/README_CN.md index e3c5c5e8e1245b5bbe4c8fafc2c53da2ff233b9e..ab5dc0a79b87709981c470ae7b886d24c144a507 100644 --- a/research/cv/lenet/README_CN.md +++ b/research/cv/lenet/README_CN.md @@ -286,7 +286,7 @@ Prediction avg time: 0.4899 ms python predict.py --ckpt_file [CKPT_PATH] --data_path [TEST_DATA_PATH] --enable_predict_lite_backend True > log.txt 2>&1 & ``` -或者你可以调用MindSpore Lite Python接口进行推理,示例如下,具体细节参考[使用Python接口执行云侧推理](https://www.mindspore.cn/lite/docs/zh-CN/master/use/cloud_infer/runtime_python.html) 。 +或者你可以调用MindSpore Lite Python接口进行推理,示例如下,具体细节参考[使用Python接口执行云侧推理](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/runtime_python.html) 。 ```bash python predict.py --mindir_path [MINDIR_PATH] --data_path [TEST_DATA_PATH] --enable_predict_lite_mindir True > log.txt 2>&1 & diff --git a/research/cv/sphereface/README.md b/research/cv/sphereface/README.md index 423fe778347c2417ff8a55e9595f6e8f9fa22c7d..1c22df06585df2c71bdc8f8d795957c575b8c13c 100644 --- a/research/cv/sphereface/README.md +++ b/research/cv/sphereface/README.md @@ -474,7 +474,7 @@ The accuracy of evaluating DenseNet121 on the 
test dataset of ImageNet will be a ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). Following the steps below, this is a simple example: - Running on Ascend and GPU diff --git a/research/cv/sphereface/README_CN.md b/research/cv/sphereface/README_CN.md index 8a24fde6ee56a16c668ba2d1fd43e7c3e89a742e..3c1a8622e287c67ee3425478f8bfcf0b3e60386f 100644 --- a/research/cv/sphereface/README_CN.md +++ b/research/cv/sphereface/README_CN.md @@ -476,7 +476,7 @@ sphereface网络使用LFW推理得到的结果如下: ### 推理 -如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。下面是操作步骤示例: +如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。下面是操作步骤示例: - Ascend、GPU处理器环境运行 diff --git a/research/cv/squeezenet/README.md b/research/cv/squeezenet/README.md index 8de6a7f208afb9c4c0495288bd559c842b83a1b4..19f13910e76308d5003b868c646134242194553b 100644 --- a/research/cv/squeezenet/README.md +++ b/research/cv/squeezenet/README.md @@ -720,7 +720,7 @@ Inference result is saved in current path, you can find result like this in acc. ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/ms_infer/overview.html). 
Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/research/cv/squeezenet1_1/README.md b/research/cv/squeezenet1_1/README.md index ff006ebbfbb4c2047bd8ad57d0700e378c06e235..44a0111a5cfe72d79753ccdb1fb1a61ecdcbeb47 100644 --- a/research/cv/squeezenet1_1/README.md +++ b/research/cv/squeezenet1_1/README.md @@ -306,7 +306,7 @@ Inference result is saved in current path, you can find result like this in acc. ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/ms_infer/overview.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/research/nlp/mass/README.md b/research/nlp/mass/README.md index cb5cddaacaf2daa51c8f6dcad74ec1501de18e50..979c9bcd4fcd820b0afd71a1eb09cc178b1cb347 100644 --- a/research/nlp/mass/README.md +++ b/research/nlp/mass/README.md @@ -181,7 +181,7 @@ The data preparation of a natural language processing task contains data cleanin In our experiments, using [Byte Pair Encoding(BPE)](https://arxiv.org/abs/1508.07909) could reduce size of vocabulary, and relieve the OOV influence effectively. Vocabulary could be created using `src/utils/dictionary.py` with text dictionary which is learnt from BPE. 
-For more detail about BPE, please refer to [Subword-nmt lib](https://www.cnpython.com/pypi/subword-nmt) or [paper](https://arxiv.org/abs/1508.07909).
+For more detail about BPE, please refer to [paper](https://arxiv.org/abs/1508.07909).

In our experiments, vocabulary was learned based on 1.9M sentences from News Crawl Dataset, size of vocabulary is 45755.

@@ -501,7 +501,7 @@ subword-nmt
rouge
```

-
+

# Get started

@@ -563,7 +563,7 @@ Get the log and output files under the path `./train_mass_*/`, and the model fil

## Inference

-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/ms_infer/overview.html).
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html).

For inference, config the options in `default_config.yaml` firstly:

- Assign the `default_config.yaml` under `data_path` node to the dataset path.
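The BPE procedure referenced in the hunks above can be illustrated with a toy sketch of the merge loop from the cited Sennrich et al. paper. This is an illustration only, not the project's actual `src/utils/dictionary.py`; the corpus and merge count are made up:

```python
import re
from collections import Counter

def get_stats(vocab):
    """Count the frequency of each adjacent symbol pair across the vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Replace every standalone occurrence of the pair with the merged symbol."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# Toy corpus: words as space-separated characters with an end-of-word marker.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
for _ in range(10):  # learn 10 merge operations
    pairs = get_stats(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_vocab(best, vocab)
# Frequent words such as 'newest</w>' collapse into single vocabulary symbols.
```

Rare words stay split into subword units, which is how BPE shrinks the vocabulary while keeping OOV words representable.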
diff --git a/research/nlp/mass/README_CN.md b/research/nlp/mass/README_CN.md
index f0053f7673ee0e72f590befb3771f8a1b349c54e..0fd2da5458f20773c9ee830a36fa33c871474137 100644
@@ -183,7 +183,7 @@ MASS脚本及代码结构如下:

实验中,使用[字节对编码(BPE)](https://arxiv.org/abs/1508.07909)可以有效减少词汇量,减轻对OOV的影响。
使用`src/utils/dictionary.py`可以基于BPE学习到的文本词典创建词汇表。

-有关BPE的更多详细信息,参见[Subword-nmt lib](https://www.cnpython.com/pypi/subword-nmt)或[论文](https://arxiv.org/abs/1508.07909)。
+有关BPE的更多详细信息,参见[论文](https://arxiv.org/abs/1508.07909)。

实验中,根据News Crawl数据集的1.9万个句子,学习到的词汇量为45755个单词。

@@ -505,7 +505,7 @@ subword-nmt
rouge
```

-
+

# 快速上手

@@ -567,7 +567,7 @@ bash run_gpu.sh -t t -n 1 -i 1

## 推理

-如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/ms_infer/overview.html)。
+如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/zh-CN/master/model_infer/index.html)。

推理时,请先配置`config.json`中的选项:

- 将`default_config.yaml`节点下的`data_path`配置为数据集路径。

diff --git a/research/recommend/ncf/README.md b/research/recommend/ncf/README.md
index 2bb4556b4077a33ff3e970a622e259d90eb4ca64..078d04083977643f61c64a291e360c1e3cbbfa38 100644
@@ -356,9 +356,9 @@ Inference result is saved in current path, you can find result like this in acc.

### Inference

-If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/ms_infer/overview.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/en/master/model_infer/index.html). Following the steps below, this is a simple example:

-
+

```python
# Load unseen dataset for inference