diff --git a/docs/note/source_en/image_classification_lite.md b/docs/note/source_en/image_classification_lite.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f478abb7251ffc3f7d5de22f7f308afd3eb3ef1
--- /dev/null
+++ b/docs/note/source_en/image_classification_lite.md
@@ -0,0 +1,33 @@
+# Image classification
+
+
+
+## Image classification introduction
+
+Image classification identifies what an image represents and predicts the list of objects it contains together with their probabilities. For example, the following table shows the classification results after model inference.
+
+
+
+| Category | Probability |
+| ---------- | ----------- |
+| plant | 0.9359 |
+| flower | 0.8641 |
+| tree | 0.8584 |
+| houseplant | 0.7867 |
+
+See the [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification) of using MindSpore Lite to implement image classification.
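The table above is the result of a short post-processing step: the model emits one raw score per category, and the app keeps the labels whose score clears a confidence threshold. A minimal sketch of that step (the `TopLabels` helper, the label set, and the 0.5 threshold are illustrative assumptions, not part of the MindSpore Lite API):

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Keep every (label, score) pair above a confidence threshold,
// sorted by descending probability -- the shape of the table above.
std::vector<std::pair<std::string, float>> TopLabels(
    const std::vector<std::string> &labels,
    const std::vector<float> &scores, float threshold) {
  std::vector<std::pair<std::string, float>> result;
  for (size_t i = 0; i < labels.size() && i < scores.size(); ++i) {
    if (scores[i] > threshold) {
      result.emplace_back(labels[i], scores[i]);
    }
  }
  std::sort(result.begin(), result.end(),
            [](const auto &a, const auto &b) { return a.second > b.second; });
  return result;
}
```

For the scores in the table, `TopLabels` would return all four categories in descending order of probability.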
+
+## Image classification model list
+
+The following table lists data for some image classification models inferred using MindSpore Lite.
+
+> The performance in the table below was tested on a Huawei Mate 30 phone.
+
+| Model name | Link | Size | Precision | CPU latency (4 threads) |
+|-----------------------|----------|----------|----------|-----------|
+| MobileNetV2 | | | | |
+| LeNet | | | | |
+| AlexNet | | | | |
+| GoogleNet | | | | |
+| ResNext50 | | | | |
+
diff --git a/docs/note/source_en/images/image_classification_result.png b/docs/note/source_en/images/image_classification_result.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7cc49f582440e31b6b5b14dbba5131bfed2a4b4
Binary files /dev/null and b/docs/note/source_en/images/image_classification_result.png differ
diff --git a/docs/note/source_en/images/object_detection.png b/docs/note/source_en/images/object_detection.png
new file mode 100644
index 0000000000000000000000000000000000000000..ad5425c86393a9367701166796df42c9e4702988
Binary files /dev/null and b/docs/note/source_en/images/object_detection.png differ
diff --git a/docs/note/source_en/object_detection_lite.md b/docs/note/source_en/object_detection_lite.md
new file mode 100644
index 0000000000000000000000000000000000000000..fbd7b794f05a08b7e5096bbd78e9791d6cc3958a
--- /dev/null
+++ b/docs/note/source_en/object_detection_lite.md
@@ -0,0 +1,29 @@
+# Object detection
+
+
+
+## Object detection introduction
+
+Object detection can identify the objects in an image and their positions. For the following figure, the output of the object detection model is shown in the table below. A rectangular box marks the position of each object in the image, annotated with the probability of the object category. The four numbers in the coordinate are Xmin, Ymin, Xmax, and Ymax; the probability indicates the confidence of the detected object.
+
+
+
+| Category | Probability | Coordinate |
+| -------- | ----------- | ---------------- |
+| mouse | 0.78 | [10, 25, 35, 43] |
+
+See the [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection) of using MindSpore Lite to implement object detection.
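Each row of such a result can be represented as a small record holding the category, the confidence, and the [Xmin, Ymin, Xmax, Ymax] box. A minimal sketch (the `Detection` struct and helper functions are illustrative assumptions, not part of the MindSpore Lite API):

```cpp
#include <string>
#include <vector>

// A detected object as described above: a class label, a confidence
// score, and a box given as [Xmin, Ymin, Xmax, Ymax] in pixels.
struct Detection {
  std::string category;
  float probability;
  float xmin, ymin, xmax, ymax;
};

// Width/height of the rectangle, derived from the corner coordinates.
float BoxWidth(const Detection &d) { return d.xmax - d.xmin; }
float BoxHeight(const Detection &d) { return d.ymax - d.ymin; }

// Drop low-confidence detections before drawing boxes on screen.
std::vector<Detection> FilterByScore(const std::vector<Detection> &dets,
                                     float threshold) {
  std::vector<Detection> kept;
  for (const auto &d : dets) {
    if (d.probability >= threshold) {
      kept.push_back(d);
    }
  }
  return kept;
}
```

The mouse in the table would be `Detection{"mouse", 0.78f, 10, 25, 35, 43}`, a 25x18-pixel box.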
+
+## Object detection model list
+
+The following table lists data for some object detection models inferred using MindSpore Lite.
+
+> The performance in the table below was tested on a Huawei Mate 30 phone.
+
+| Model name | Link | Size | Precision | CPU latency (4 threads) |
+|-----------------------|----------|----------|----------|-----------|
+| SSD | | | | |
+| Faster_RCNN | | | | |
+| Yolov3_Darknet53 | | | | |
+| Mask_RCNN | | | | |
+
diff --git a/docs/note/source_zh_cn/image_classification_lite.md b/docs/note/source_zh_cn/image_classification_lite.md
new file mode 100644
index 0000000000000000000000000000000000000000..147ce0f8512ac53d6958f1df2fb7c97db78d4989
--- /dev/null
+++ b/docs/note/source_zh_cn/image_classification_lite.md
@@ -0,0 +1,33 @@
+# 图像分类
+
+
+
+## 图像分类介绍
+
+图像分类模型可以预测图片中出现哪些物体,识别出图片中出现物体列表及其概率。 比如下图经过模型推理的分类结果为下表:
+
+
+
+| 类别 | 概率 |
+| ---------- | ------ |
+| plant | 0.9359 |
+| flower | 0.8641 |
+| tree | 0.8584 |
+| houseplant | 0.7867 |
+
+使用MindSpore Lite实现图像分类的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification)。
+
+## 图像分类模型列表
+
+下表是使用MindSpore Lite推理的部分图像分类模型的数据。
+
+> 下表的性能是在Mate 30手机上测试的。
+
+| 模型名称 | 模型链接 | 大小 | 精度 | CPU 4线程时延 |
+|-----------------------|----------|----------|----------|-----------|
+| MobileNetV2 | | | | |
+| LeNet | | | | |
+| AlexNet | | | | |
+| GoogleNet | | | | |
+| ResNext50 | | | | |
+
diff --git a/docs/note/source_zh_cn/images/image_classification_result.png b/docs/note/source_zh_cn/images/image_classification_result.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7cc49f582440e31b6b5b14dbba5131bfed2a4b4
Binary files /dev/null and b/docs/note/source_zh_cn/images/image_classification_result.png differ
diff --git a/docs/note/source_zh_cn/images/object_detection.png b/docs/note/source_zh_cn/images/object_detection.png
new file mode 100644
index 0000000000000000000000000000000000000000..ad5425c86393a9367701166796df42c9e4702988
Binary files /dev/null and b/docs/note/source_zh_cn/images/object_detection.png differ
diff --git a/docs/note/source_zh_cn/object_detection_lite.md b/docs/note/source_zh_cn/object_detection_lite.md
new file mode 100644
index 0000000000000000000000000000000000000000..c4f741a6bb334e5870f82f72af6943e7d36569cd
--- /dev/null
+++ b/docs/note/source_zh_cn/object_detection_lite.md
@@ -0,0 +1,29 @@
+# 对象检测
+
+
+
+## 对象检测介绍
+
+对象检测可以识别出图片中的对象和该对象在图片中的位置。如:对下图使用对象检测模型的输出如下表所示,使用矩形框标识图中对象的位置并且标注出对象类别的概率,其中坐标中的4个数字分别为Xmin、Ymin、Xmax、Ymax;概率反映被检测物体的可信程度。
+
+
+
+| 类别 | 概率 | 坐标 |
+| ----- | ---- | ---------------- |
+| mouse | 0.78 | [10, 25, 35, 43] |
+
+使用MindSpore Lite实现对象检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection)。
+
+## 对象检测模型列表
+
+下表是使用MindSpore Lite推理的部分对象检测模型的数据。
+
+> 下表的性能是在Mate 30手机上测试的。
+
+| 模型名称 | 模型链接 | 大小 | 精度 | CPU 4线程时延 |
+|-----------------------|----------|----------|----------|-----------|
+| SSD | | | | |
+| Faster_RCNN | | | | |
+| YoloV3_Darknet53 | | | | |
+| Mask_RCNN | | | | |
+
diff --git a/tutorials/lite/source_zh_cn/quick_start/quick_start.md b/tutorials/lite/source_zh_cn/quick_start/quick_start.md
index ef76d900d3bbb15f9e2680656e356f7e9bf71b2a..4f97e73ff86df37772005bd1f42284dbfe40b4c8 100644
--- a/tutorials/lite/source_zh_cn/quick_start/quick_start.md
+++ b/tutorials/lite/source_zh_cn/quick_start/quick_start.md
@@ -1,154 +1,135 @@
-# 实现一个图像分类应用
+# Implementing an Image Classification Application
-- [实现一个图像分类应用](#实现一个图像分类应用)
- - [概述](#概述)
- - [选择模型](#选择模型)
- - [转换模型](#转换模型)
- - [部署应用](#部署应用)
- - [运行依赖](#运行依赖)
- - [构建与运行](#构建与运行)
- - [示例程序详细说明](#示例程序详细说明)
- - [示例程序结构](#示例程序结构)
- - [配置MindSpore Lite依赖项](#配置mindspore-lite依赖项)
- - [下载及部署模型文件](#下载及部署模型文件)
- - [编写端侧推理代码](#编写端侧推理代码)
+- [Implementing an Image Classification Application](#implementing-an-image-classification-application)
+ - [Overview](#overview)
+ - [Selecting a Model](#selecting-a-model)
+ - [Converting a Model](#converting-a-model)
+ - [Deploying an Application](#deploying-an-application)
+ - [Running Dependencies](#running-dependencies)
+ - [Building and Running](#building-and-running)
+ - [Detailed Description of the Sample Program](#detailed-description-of-the-sample-program)
+ - [Sample Program Structure](#sample-program-structure)
+ - [Configuring MindSpore Lite Dependencies](#configuring-mindspore-lite-dependencies)
+ - [Downloading and Deploying a Model File](#downloading-and-deploying-a-model-file)
+ - [Compiling On-Device Inference Code](#compiling-on-device-inference-code)
-
+
-## 概述
+## Overview
-我们推荐你从端侧Android图像分类demo入手,了解MindSpore Lite应用工程的构建、依赖项配置以及相关API的使用。
+It is recommended that you start from the image classification demo on the Android device to understand how to build the MindSpore Lite application project, configure dependencies, and use related APIs.
-本教程基于MindSpore团队提供的Android“端侧图像分类”示例程序,演示了端侧部署的流程。
-1. 选择图像分类模型。
-2. 将模型转换成MindSpore Lite模型格式。
-3. 在端侧使用MindSpore Lite推理模型。详细说明如何在端侧利用MindSpore Lite C++ API(Android JNI)和MindSpore Lite图像分类模型完成端侧推理,实现对设备摄像头捕获的内容进行分类,并在APP图像预览界面中,显示出最可能的分类结果。
+This tutorial demonstrates the on-device deployment process based on the image classification sample program on the Android device provided by the MindSpore team.
+
+1. Select an image classification model.
+2. Convert the model into a MindSpore Lite model.
+3. Use the MindSpore Lite inference model on the device. The following describes how to use the MindSpore Lite C++ APIs (Android JNIs) and MindSpore Lite image classification models to perform on-device inference, classify the content captured by a device camera, and display the most possible classification result on the application's image preview screen.
-> 你可以在这里找到[Android图像分类模型](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite)和[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification)。
+> Click to find [Android image classification models](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite) and [sample code](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification).
-## 选择模型
+## Selecting a Model
-MindSpore团队提供了一系列预置终端模型,你可以在应用程序中使用这些预置的终端模型。
-MindSpore Model Zoo中图像分类模型可[在此下载](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)。
-同时,你也可以使用预置模型做迁移学习,以实现自己的图像分类任务。
+The MindSpore team provides a series of preset device models that you can use in your application.
+Click [here](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms) to download image classification models in MindSpore ModelZoo.
+In addition, you can use the preset model to perform migration learning to implement your image classification tasks.
-## 转换模型
+## Converting a Model
-如果预置模型已经满足你要求,请跳过本章节。 如果你需要对MindSpore提供的模型进行重训,重训完成后,需要将模型导出为[.mindir格式](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#mindir)。然后使用MindSpore Lite[模型转换工具](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html)将.mindir模型转换成.ms格式。
+If the preset model already meets your requirements, skip this section. After you retrain a model provided by MindSpore, export the model in the [.mindir format](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-mindir-model). Then use the MindSpore Lite [model conversion tool](https://www.mindspore.cn/lite/tutorial/en/master/use/converter_tool.html) to convert the .mindir model to a .ms model.
-以mobilenetv2模型为例,如下脚本将其转换为MindSpore Lite模型用于端侧推理。
+Take the mobilenetv2 model as an example. Execute the following script to convert a model into a MindSpore Lite model for on-device inference.
```bash
./converter_lite --fmk=MS --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
```
-## 部署应用
+## Deploying an Application
-接下来介绍如何构建和执行mindspore Lite端侧图像分类任务。
+The following section describes how to build and execute an on-device image classification task on MindSpore Lite.
-### 运行依赖
+### Running Dependencies
-- Android Studio >= 3.2 (推荐4.0以上版本)
-- NDK 21.3
-- CMake 3.10.2
-- Android SDK >= 26
-- OpenCV >= 4.0.0 (本示例代码已包含)
+- Android Studio 3.2 or later (version 4.0 or later is recommended)
+- Native development kit (NDK) 21.3
+- [CMake](https://cmake.org/download) 3.10.2
+- Android software development kit (SDK) 26 or later
+- [JDK]( https://www.oracle.com/downloads/otn-pub/java/JDK/) 1.8 or later
-### 构建与运行
+### Building and Running
-1. 在Android Studio中加载本示例源码,并安装相应的SDK(指定SDK版本后,由Android Studio自动安装)。
+1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)

- 启动Android Studio后,点击`File->Settings->System Settings->Android SDK`,勾选相应的SDK。如下图所示,勾选后,点击`OK`,Android Studio即可自动安装SDK。
+ Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio automatically installs the SDK.

- (可选)若安装时出现NDK版本问题,可手动下载相应的[NDK版本](https://developer.android.com/ndk/downloads?hl=zh-cn)(本示例代码使用的NDK版本为21.3),并在`Project Structure`的`Android NDK location`设置中指定SDK的位置。
+ (Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3). Specify the SDK location in `Android NDK location` of `Project Structure`.

-2. 连接Android设备,运行图像分类应用程序。
+2. Connect to an Android device and run the image classification application.
- 通过USB连接Android设备调试,点击`Run 'app'`即可在你的设备上运行本示例项目。
+ Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.

- Android Studio连接设备调试操作,可参考。
-
-3. 在Android设备上,点击“继续安装”,安装完即可查看到设备摄像头捕获的内容和推理结果。
+   For details about how to connect Android Studio to a device for debugging, see .
- 
+   Android Studio can recognize the phone only after USB debugging mode is enabled on it. On Huawei phones, USB debugging is generally enabled in Settings > System & updates > Developer options > USB debugging.
- 识别结果如下图所示。
+3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by a camera and the inference result.

-## 示例程序详细说明
+## Detailed Description of the Sample Program
-本端侧图像分类Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能;JNI层在[Runtime](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html)中完成模型推理的过程。
+This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html).
-> 此处详细说明示例程序的JNI层实现,JAVA层运用Android Camera 2 API实现开启设备摄像头以及图像帧处理等功能,需读者具备一定的Android开发基础知识。
+> The following describes the JNI layer implementation of the sample program. At the Java layer, the Android Camera 2 API is used to enable a device camera and process image frames. Readers are expected to have basic knowledge of Android development.
-### 示例程序结构
+### Sample Program Structure
```
app
-|
-├── libs # 存放MindSpore Lite依赖的库文件
-│ └── arm64-v8a
-│ ├── libopencv_java4.so
-│ └── libmindspore-lite.so
│
-├── opencv # opencv 相关依赖文件
-│ └── ...
-|
├── src/main
-│ ├── assets # 资源文件
-| | └── model.ms # 存放模型文件
+│ ├── assets # resource files
+| | └── mobilenetv2.ms # model file
│ |
-│ ├── cpp # 模型加载和预测主要逻辑封装类
-| | ├── ..
-| | ├── MindSporeNetnative.cpp # MindSpore调用相关的JNI方法
-│ | └── MindSporeNetnative.h # 头文件
+│ ├── cpp # main logic encapsulation classes for model loading and prediction
+| | |── ...
+| | ├── mindspore_lite_x.x.x-minddata-arm64-cpu # MindSpore Lite version directory
+| | ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calling
+│ | └── MindSporeNetnative.h # header file
│ |
-│ ├── java # java层应用代码
+│ ├── java # application code at the Java layer
│ │ └── com.huawei.himindsporedemo
-│ │ ├── gallery.classify # 图像处理及MindSpore JNI调用相关实现
+│ │ ├── gallery.classify # implementation related to image processing and MindSpore JNI calling
│ │ │ └── ...
-│ │ └── obejctdetect # 开启摄像头及绘制相关实现
+│ │ └── widget # implementation related to camera enabling and drawing
│ │ └── ...
│ │
-│ ├── res # 存放Android相关的资源文件
-│ └── AndroidManifest.xml # Android配置文件
+│ ├── res # resource files related to Android
+│ └── AndroidManifest.xml # Android configuration file
│
-├── CMakeList.txt # cmake编译入口文件
+├── CMakeList.txt # CMake compilation entry file
│
-├── build.gradle # 其他Android配置文件
+├── build.gradle # other Android configuration file
+├── download.gradle # script that downloads the MindSpore Lite package and model file
└── ...
```
-### 配置MindSpore Lite依赖项
-
-Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html)生成`libmindspore-lite.so`库文件。
-
-本示例中,bulid过程由download.gradle文件配置自动下载`libmindspore-lite.so`以及OpenCV的`libopencv_java4.so`库文件,并放置在`app/libs/arm64-v8a`目录下。
-
-注: 若自动下载失败,请手动下载相关库文件并将其放在对应位置:
-
-libmindspore-lite.so [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
-
-libmindspore-lite include文件 [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/include.zip)
-
-libopencv_java4.so [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so)
-
-libopencv include文件 [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/include.zip)
+### Configuring MindSpore Lite Dependencies
+
+When MindSpore C++ APIs are called at the Android JNI layer, the related library files are required. You can build MindSpore Lite from [source code](https://www.mindspore.cn/lite/tutorial/en/master/build.html) to generate the `libmindspore-lite.so` library file.
+
+In Android Studio, place the compiled `libmindspore-lite.so` library file (which may cover multiple compatible architectures) in the `app/libs/arm64-v8a` (Arm64) or `app/libs/armeabi-v7a` (Arm32) directory of the application project. In the `build.gradle` file of the application, configure CMake compilation support for `arm64-v8a` and `armeabi-v7a`:
```
android{
@@ -166,61 +147,82 @@ android{
}
```
-在`app/CMakeLists.txt`文件中建立`.so`库文件链接,如下所示。
+Create a link to the `.so` library file in the `app/CMakeLists.txt` file:
```
-# Set MindSpore Lite Dependencies.
-include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/include/MindSpore)
+# ============== Set MindSpore Dependencies. =============
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/third_party/flatbuffers/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/ir/dtype)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/schema)
+
add_library(mindspore-lite SHARED IMPORTED )
-set_target_properties(mindspore-lite PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libmindspore-lite.so")
+add_library(minddata-lite SHARED IMPORTED )
-# Set OpenCV Dependecies.
-include_directories(${CMAKE_SOURCE_DIR}/opencv/sdk/native/jni/include)
-add_library(lib-opencv SHARED IMPORTED )
-set_target_properties(lib-opencv PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libopencv_java4.so")
+set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libmindspore-lite.so)
+set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
+# --------------- MindSpore Lite set End. --------------------
# Link target library.
target_link_libraries(
...
- mindspore-lite
- lib-opencv
+ # --- mindspore ---
+ minddata-lite
+ mindspore-lite
...
)
```
-### 下载及部署模型文件
-从MindSpore Model Hub中下载模型文件,本示例程序中使用的终端图像分类模型文件为`mobilenetv2.ms`,同样通过`download.gradle`脚本在APP构建时自动下载,并放置在`app/src/main/assets`工程目录下。
-注:若下载失败请手工下载模型文件,mobilenetv2.ms [下载链接](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)
+In this example, the `download.gradle` script automatically downloads the MindSpore Lite library files during the build and places them in the `app/src/main/cpp/mindspore_lite_x.x.x-minddata-arm64-cpu` directory.
+
+Note: If the automatic download fails, manually download the relevant library files and put them in the corresponding location.
+
+libmindspore-lite.so [download link](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
+
-### 编写端侧推理代码
-在JNI层调用MindSpore Lite C++ API实现端测推理。
-推理代码流程如下,完整代码请参见`src/cpp/MindSporeNetnative.cpp`。
-1. 加载MindSpore Lite模型文件,构建上下文、会话以及用于推理的计算图。
+### Downloading and Deploying a Model File
- - 加载模型文件:创建并配置用于模型推理的上下文
+In this example, the `download.gradle` script automatically downloads `mobilenetv2.ms` and places it in the `app/src/main/assets` directory.
+
+Note: If the automatic download fails, manually download the model file and put it in the corresponding location.
+
+mobilenetv2.ms [download link](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)
+
+
+
+### Compiling On-Device Inference Code
+
+Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
+
+The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.
+
+1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
+
+ - Load a model file. Create and configure the context for model inference.
```cpp
// Buffer is the model data passed in by the Java layer
jlong bufferLen = env->GetDirectBufferCapacity(buffer);
char *modelBuffer = CreateLocalModelBuffer(env, buffer);
```
- - 创建会话
+ - Create a session.
```cpp
void **labelEnv = new void *;
MSNetWork *labelNet = new MSNetWork;
*labelEnv = labelNet;
// Create context.
- lite::Context *context = new lite::Context;
- context->device_ctx_.type = lite::DT_CPU;
- context->thread_num_ = numThread; //Specify the number of threads to run inference
+ mindspore::lite::Context *context = new mindspore::lite::Context;
+ context->thread_num_ = num_thread;
// Create the mindspore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
@@ -228,7 +230,7 @@ target_link_libraries(
```
- - 加载模型文件并构建用于推理的计算图
+ - Load the model file and build a computational graph for inference.
```cpp
void MSNetWork::CreateSessionMS(char* modelBuffer, size_t bufferLen, std::string name, mindspore::lite::Context* ctx)
{
@@ -239,9 +241,9 @@ target_link_libraries(
}
```
-2. 将输入图片转换为传入MindSpore模型的Tensor格式。
+2. Convert the input image into the Tensor format of the MindSpore model.
- 将待检测图片数据转换为输入MindSpore模型的Tensor。
+ Convert the image data to be detected into the Tensor format of the MindSpore model.
```cpp
// Convert the Bitmap image passed in from the JAVA layer to Mat for OpenCV processing
@@ -270,59 +272,60 @@ target_link_libraries(
delete[] (dataHWC);
```
-3. 对输入Tensor按照模型进行推理,获取输出Tensor,并进行后处理。
+3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
- - 图执行,端测推理。
+ - Perform graph execution and on-device inference.
```cpp
// After the model and image tensor data is loaded, run inference.
auto status = mSession->RunGraph();
```
- - 获取输出数据。
+ - Obtain the output data.
```cpp
- auto msOutputs = mSession->GetOutputs();
+ auto names = mSession->GetOutputTensorNames();
+        std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
+        for (const auto &name : names) {
+            auto temp_dat = mSession->GetOutputByTensorName(name);
+            msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *>{name, temp_dat});
+        }
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
- - 输出数据的后续处理。
+ - Perform post-processing of the output data.
```cpp
std::string ProcessRunnetResult(std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs, int runnetRet) {
- // Get model output results.
- std::unordered_map::iterator iter;
- iter = msOutputs.begin();
- auto brach1_string = iter->first;
- auto branch1_tensor = iter->second;
+        std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
+        iter = msOutputs.begin();
- int OUTPUTS_LEN = branch1_tensor->ElementsNum();
+ // The mobilenetv2.ms model output just one branch.
+ auto outputTensor = iter->second;
+ int tensorNum = outputTensor->ElementsNum();
+ MS_PRINT("Number of tensor elements:%d", tensorNum);
- float *temp_scores = static_cast(branch1_tensor->MutableData());
- float scores[RET_CATEGORY_SUM];
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- scores[i] = temp_scores[i];
- }
+ // Get a pointer to the first score.
+        float *temp_scores = static_cast<float *>(outputTensor->MutableData());
- // Converted to text information that needs to be displayed in the APP.
- std::string retStr = "";
- if (runnetRet == 0) {
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- if (scores[i] > 0.3){
- retStr += g_labels_name_map[i];
- retStr += ":";
- std::string score_str = std::to_string(scores[i]);
- retStr += score_str;
- retStr += ";";
- }
- }
- else {
- MS_PRINT("MindSpore run net failed!");
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- retStr += " :0.0;";
- }
+ float scores[RET_CATEGORY_SUM];
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ if (temp_scores[i] > 0.5) {
+ MS_PRINT("MindSpore scores[%d] : [%f]", i, temp_scores[i]);
}
+ scores[i] = temp_scores[i];
+ }
- return retStr;
+ // Score for each category.
+ // Converted to text information that needs to be displayed in the APP.
+ std::string categoryScore = "";
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ categoryScore += labels_name_map[i];
+ categoryScore += ":";
+ std::string score_str = std::to_string(scores[i]);
+ categoryScore += score_str;
+ categoryScore += ";";
+ }
+ return categoryScore;
}
- ```
+ ```
\ No newline at end of file