From 184aa1d856e42c1ee8198f2a78602f682b9f0075 Mon Sep 17 00:00:00 2001 From: qian-dan <756328797@qq.com> Date: Wed, 20 Aug 2025 10:30:16 +0800 Subject: [PATCH] modify mindspore lite Signed-off-by: qian-dan <756328797@qq.com> --- docs/lite/docs/source_en/advanced/micro.md | 10 ++++----- .../third_party/converter_register.md | 6 ++--- .../advanced/third_party/npu_info.md | 2 +- .../advanced/third_party/tensorrt_info.md | 2 +- .../source_en/converter/converter_tool.md | 4 +++- .../source_en/infer/image_segmentation.md | 8 +++---- docs/lite/docs/source_en/infer/quick_start.md | 10 ++++----- .../docs/source_en/infer/quick_start_c.md | 8 +++---- .../docs/source_en/infer/quick_start_cpp.md | 2 +- docs/lite/docs/source_en/infer/runtime_cpp.md | 4 ++-- .../lite/docs/source_en/infer/runtime_java.md | 4 ++-- docs/lite/docs/source_en/reference/faq.md | 22 +++++++++---------- .../docs/source_en/tools/benchmark_tool.md | 4 ++-- .../source_en/tools/benchmark_train_tool.md | 4 ++-- .../lite/docs/source_en/tools/cropper_tool.md | 2 +- .../docs/source_en/tools/obfuscator_tool.md | 2 +- docs/lite/docs/source_en/tools/visual_tool.md | 2 +- .../docs/source_en/train/converter_train.md | 2 +- .../docs/source_en/train/runtime_train_cpp.md | 4 ++-- .../source_en/train/runtime_train_java.md | 2 +- docs/lite/docs/source_en/train/train_lenet.md | 8 +++---- .../docs/source_en/train/train_lenet_java.md | 8 +++---- docs/lite/docs/source_zh_cn/advanced/micro.md | 10 ++++----- .../third_party/converter_register.md | 8 +++---- .../advanced/third_party/npu_info.md | 2 +- .../advanced/third_party/tensorrt_info.md | 2 +- .../source_zh_cn/converter/converter_tool.md | 4 +++- .../source_zh_cn/infer/image_segmentation.md | 8 +++---- .../docs/source_zh_cn/infer/quick_start.md | 12 +++++----- .../docs/source_zh_cn/infer/quick_start_c.md | 8 +++---- .../source_zh_cn/infer/quick_start_cpp.md | 10 ++++----- .../docs/source_zh_cn/infer/runtime_cpp.md | 4 ++-- .../docs/source_zh_cn/infer/runtime_java.md | 6 ++--- docs/lite/docs/source_zh_cn/reference/faq.md | 22 +++++++++---------- .../docs/source_zh_cn/tools/benchmark_tool.md | 4 ++-- .../tools/benchmark_train_tool.md | 4 ++-- .../docs/source_zh_cn/tools/cropper_tool.md | 2 +- .../source_zh_cn/tools/obfuscator_tool.md | 2 +- .../docs/source_zh_cn/tools/visual_tool.md | 2 +- .../source_zh_cn/train/converter_train.md | 6 ++--- .../source_zh_cn/train/runtime_train_cpp.md | 6 ++--- .../source_zh_cn/train/runtime_train_java.md | 4 ++-- .../docs/source_zh_cn/train/train_lenet.md | 8 +++---- .../source_zh_cn/train/train_lenet_java.md | 8 +++---- 44 files changed, 133 insertions(+), 129 deletions(-) diff --git a/docs/lite/docs/source_en/advanced/micro.md b/docs/lite/docs/source_en/advanced/micro.md index ecd7ac7154..cd0fc726ab 100644 --- a/docs/lite/docs/source_en/advanced/micro.md +++ b/docs/lite/docs/source_en/advanced/micro.md @@ -32,7 +32,7 @@ The following describes how to prepare the environment for using the conversion You can obtain the conversion tool in either of the following ways: - - Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) from the MindSpore official website. + - Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) from the MindSpore Lite official website. Download the release package whose OS is Linux-x86_64 and hardware platform is CPU. 
@@ -491,7 +491,7 @@ For preparing environment section, refer to the [above](#preparing-environment), After generating model inference code, you need to obtain the `Micro` lib on which the generated inference code depends before performing integrated development on the code. The inference code of different platforms depends on the `Micro` lib of the corresponding platform. You need to specify the platform via the micro configuration item `target` based on the platform in use when generating code, and obtain the `Micro` lib of the platform when obtaining the inference package. -You can download the [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) of the corresponding platform from the MindSpore official website. +You can download the [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) of the corresponding platform from the MindSpore Lite official website. In chapter [Generating Model Inference Code](#generating-model-inference-code), we obtain the model inference code of the Linux platform with the x86_64 architecture. The `Micro` lib on which the code depends is the release package used by the conversion tool. In the release package, the following content depended by the inference code: @@ -619,7 +619,7 @@ mnist # Specified name of generated code root directory The STM32F767 uses the Cortex-M7 architecture. You can obtain the `Micro` lib of the architecture in either of the following ways: -- Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) from the MindSpore official website. +- Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) from the MindSpore Lite official website. You need to download the release package whose OS is None and hardware platform is Cortex-M7. @@ -627,7 +627,7 @@ The STM32F767 uses the Cortex-M7 architecture. You can obtain the `Micro` lib of You can run the `MSLITE_MICRO_PLATFORM=cortex-m7 bash build.sh -I x86_64` command to compile the Cortex-M7 release package. -For other Cortex-M architecture platforms that do not provide release packages for download, you can modify MindSpore source code and manually compile the code to obtain the release package by referring to the method of compiling and building from source code. +For other Cortex-M architecture platforms that do not provide release packages for download, you can modify MindSpore Lite source code and manually compile the code to obtain the release package by referring to the method of compiling and building from source code. ### Code Integration and Compilation Deployment on Windows: Integrated Development Through IAR @@ -1225,7 +1225,7 @@ Users can directly refer to the above content. ### Export weights of inference model -MindSpore `Serialization` class provides the `ExportWeightsCollaborateWithMicro` function, and `ExportWeightsCollaborateWithMicro` is as follows. +MindSpore Lite `Serialization` class provides the `ExportWeightsCollaborateWithMicro` function, and `ExportWeightsCollaborateWithMicro` is as follows. 
```cpp static Status ExportWeightsCollaborateWithMicro(const Model &model, ModelType model_type, diff --git a/docs/lite/docs/source_en/advanced/third_party/converter_register.md b/docs/lite/docs/source_en/advanced/third_party/converter_register.md index 61cb1b476f..37c5b2790a 100644 --- a/docs/lite/docs/source_en/advanced/third_party/converter_register.md +++ b/docs/lite/docs/source_en/advanced/third_party/converter_register.md @@ -10,9 +10,9 @@ We have designed a set of registration mechanism, which allows users to expand, node-parse extension: The users can define the process to parse a certain node of a model by themselves, which only support ONNX, CAFFE, TF and TFLITE. The related interface is [NodeParser](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_NodeParser.html), [NodeParserRegistry](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_NodeParserRegistry.html). model-parse extension: The users can define the process to parse a model by themselves, which only support ONNX, CAFFE, TF and TFLITE. The related interface is [ModelParser](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_ModelParser.html), [ModelParserRegistry](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_ModelParserRegistry.html). -graph-optimization extension: After parsing a model, a graph structure defined by MindSpore will show up and then, the users can define the process to optimize the parsed graph. The related interfaces are [PassBase](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_PassBase.html), [PassPosition](https://mindspore.cn/lite/api/en/r2.7.0/generate/enum_mindspore_registry_PassPosition-1.html), [PassRegistry](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_PassRegistry.html). +graph-optimization extension: After parsing a model, a graph structure defined by MindSpore Lite will show up and then, the users can define the process to optimize the parsed graph. The related interfaces are [PassBase](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_PassBase.html), [PassPosition](https://mindspore.cn/lite/api/en/r2.7.0/generate/enum_mindspore_registry_PassPosition-1.html), [PassRegistry](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_PassRegistry.html). -> The node-parse extension needs to rely on the flatbuffers, protobuf and the serialization files of third-party frameworks, at the same time, the version of flatbuffers and the protobuf needs to be consistent with that of the released package, the serialized files must be compatible with that used by the released package. Note that the flatbuffers, protobuf and the serialization files are not provided in the released package, users need to compile and generate the serialized files by themselves. 
The users can obtain the basic information about [flabuffers](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake), [probobuf](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/protobuf.cmake), [ONNX prototype file](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/onnx), [CAFFE prototype file](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/caffe), [TF prototype file](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/tensorflow) and [TFLITE prototype file](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/tools/converter/parser/tflite/schema.fbs) from the [MindSpore WareHouse](https://gitee.com/mindspore/mindspore/tree/v2.7.0).
+> The node-parse extension needs to rely on the flatbuffers, protobuf and the serialization files of third-party frameworks. At the same time, the versions of flatbuffers and protobuf need to be consistent with those of the released package, and the serialized files must be compatible with those used by the released package. Note that the flatbuffers, protobuf and the serialization files are not provided in the released package, so users need to compile and generate the serialized files by themselves. The users can obtain the basic information about [flatbuffers](https://gitee.com/mindspore/mindspore-lite/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake), [protobuf](https://gitee.com/mindspore/mindspore-lite/blob/v2.7.0/cmake/external_libs/protobuf.cmake), [ONNX prototype file](https://gitee.com/mindspore/mindspore-lite/tree/v2.7.0/third_party/proto/onnx), [CAFFE prototype file](https://gitee.com/mindspore/mindspore-lite/tree/v2.7.0/third_party/proto/caffe), [TF prototype file](https://gitee.com/mindspore/mindspore-lite/tree/r2.7/third_party/proto/tensorflow) and [TFLITE prototype file](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/tools/converter/parser/tflite/schema.fbs) from the [MindSpore Lite repository](https://gitee.com/mindspore/mindspore-lite/tree/v2.7.0).
 >
 > MindSpore Lite also provides a series of registration macros to facilitate user access. These macros include node-parse registration [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), model-parse registration [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_model_parser_registry.h_REG_MODEL_PARSER-1.html), graph-optimization registration [REG_PASS](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_pass_registry.h_REG_PASS-1.html) and graph-optimization scheduled registration [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_pass_registry.h_REG_SCHEDULED_PASS-1.html).
@@ -94,7 +94,7 @@ For the sample code, please refer to [pass](https://gitee.com/mindspore/mindspor
 
 The release package of MindSpore Lite doesn't provide serialized files of other frameworks, therefore, users need to compile and obtain by yourselves. Here, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/third_party/converter_register.html#overview).
 
-  The case is a tflite model, users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake) and combine the [TFLITE Proto File](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/tools/converter/parser/tflite/schema.fbs) to generate the serialized file.
+ The case is a tflite model, users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/cmake/external_libs/flatbuffers.cmake) and combine the [TFLITE Proto File](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/tools/converter/parser/tflite/schema.fbs) to generate the serialized file. After generating, users need to create a directory `schema` under the directory of `mindspore-lite/examples/converter_extend` and then place the serialized file in it. diff --git a/docs/lite/docs/source_en/advanced/third_party/npu_info.md b/docs/lite/docs/source_en/advanced/third_party/npu_info.md index 48829794a8..2c51e20186 100644 --- a/docs/lite/docs/source_en/advanced/third_party/npu_info.md +++ b/docs/lite/docs/source_en/advanced/third_party/npu_info.md @@ -12,7 +12,7 @@ Download [DDK 100.510.010.010](https://developer.huawei.com/consumer/en/doc/deve ### Build -Under the Linux operating system, one can easily build MindSpore Lite Package integrating NPU interfaces and libraries using build.sh under the root directory of MindSpore [Source Code](https://gitee.com/mindspore/mindspore). The command is as follows. +Under the Linux operating system, one can easily build MindSpore Lite Package integrating NPU interfaces and libraries using build.sh under the root directory of MindSpore [Source Code](https://gitee.com/mindspore/mindspore-lite). The command is as follows. It will build MindSpore Lite's package under the output directory under the MindSpore source code root directory, which contains the NPU's dynamic library, the libmindspore-lite dynamic library, and the test tool Benchmark. ```bash diff --git a/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md b/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md index 434bdee4a3..138815ec3b 100644 --- a/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md +++ b/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md @@ -14,7 +14,7 @@ Install TensorRT of the corresponding CUDA version, and set the installed direct ### Build -In the Linux environment, use the build.sh script in the root directory of MindSpore [Source Code](https://gitee.com/mindspore/mindspore) to build the MindSpore Lite package integrated with TensorRT. First configure the environment variable `MSLITE_GPU_BACKEND=tensorrt`, and then execute the compilation command as follows. It will build a package for MindSpore Lite in the output directory under the root of the MindSpore source code, containing `libmindspore-lite.so` and the test tool Benchmark. +In the Linux environment, use the build.sh script in the root directory of MindSpore [Source Code](https://gitee.com/mindspore/mindspore-lite) to build the MindSpore Lite package integrated with TensorRT. First configure the environment variable `MSLITE_GPU_BACKEND=tensorrt`, and then execute the compilation command as follows. It will build a package for MindSpore Lite in the output directory under the root of the MindSpore source code, containing `libmindspore-lite.so` and the test tool Benchmark. ```bash bash build.sh -I x86_64 diff --git a/docs/lite/docs/source_en/converter/converter_tool.md b/docs/lite/docs/source_en/converter/converter_tool.md index df39285cdc..c417a99a9e 100644 --- a/docs/lite/docs/source_en/converter/converter_tool.md +++ b/docs/lite/docs/source_en/converter/converter_tool.md @@ -88,7 +88,7 @@ The following describes the parameters in detail. 
> - The `configFile` configuration files uses the `key=value` mode to define related parameters. For the configuration parameters related to quantization, please refer to [quantization](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/quantization.html). For the configuration parameters related to extension, please refer to [Extension Configuration](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/third_party/converter_register.html#extension-configuration). > - `--optimize` parameter is used to set the mode of optimization during the offline conversion. If this parameter is set to none, no relevant graph optimization operations will be performed during the offline conversion phase of the model, and the relevant graph optimization operations will be done during the execution of the inference phase. The advantage of this parameter is that the converted model can be deployed directly to any CPU/GPU/Ascend hardware backend since it is not optimized in a specific way, while the disadvantage is that the initialization time of the model increases during inference execution. If this parameter is set to general, general optimization will be performed, such as constant folding and operator fusion (the converted model only supports CPU/GPU hardware backend, not Ascend backend). If this parameter is set to gpu_oriented, the general optimization and extra optimization for GPU hardware will be performed (the converted model only supports GPU hardware backend). If this parameter is set to ascend_oriented, the optimization for Ascend hardware will be performed (the converted model only supports Ascend hardware backend). > - The encryption and decryption function only takes effect when `MSLITE_ENABLE_MODEL_ENCRYPTION=on` is set at [compile](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) time and only supports Linux x86 platforms, and the key is a string represented by hexadecimal. Users on the Linux platform can use the `xxd` tool to convert the key represented by the bytes to a hexadecimal representation. -It should be noted that the encryption and decryption algorithm has been updated in version 1.7. As a result, the new version of the converter tool does not support the conversion of the encrypted model exported by MindSpore in version 1.6 and earlier. +It should be noted that the encryption and decryption algorithm has been updated in version 1.7. As a result, the new version of the converter tool does not support the conversion of the encrypted model exported by MindSpore Lite in version 1.6 and earlier. > - Parameters `--input_shape` and dynamicDims are stored in the model during conversion. Call model.get_model_info("input_shape") and model.get_model_info("dynamic_dims") to get it when using the model. ### CPU Model Optimization @@ -178,6 +178,8 @@ The following describes how to use the conversion command by using several commo To use the MindSpore Lite model conversion tool, the following environment preparations are required. +- The Windows conversion tool is compiled based on mingw-64 and depends on related dynamic libraries. Therefore, [mingw-w64](https://www.mingw-w64.org/downloads/) must be installed. + - [Compile](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) model transfer tool. - Add the path of dynamic library required by the conversion tool to the environment variables PATH. 
diff --git a/docs/lite/docs/source_en/infer/image_segmentation.md b/docs/lite/docs/source_en/infer/image_segmentation.md index 15544bdd97..90f248323c 100644 --- a/docs/lite/docs/source_en/infer/image_segmentation.md +++ b/docs/lite/docs/source_en/infer/image_segmentation.md @@ -6,7 +6,7 @@ It is recommended that you start from the image segmentation demo on the Android device to understand how to build the MindSpore Lite application project, configure dependencies, and use related Java APIs. -This tutorial demonstrates the on-device deployment process based on the image segmentation demo on the Android device provided by the MindSpore team. +This tutorial demonstrates the on-device deployment process based on the image segmentation demo on the Android device provided by the MindSpore Lite team. ## Selecting a Model @@ -101,7 +101,7 @@ app ### Configuring MindSpore Lite Dependencies -Related library files are required for Android to call MindSpore Android AAR. You can use MindSpore Lite [source code](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) to generate the `mindspore-lite-maven-{version}.zip` library file package (including the `mindspore-lite-{version}.aar` library file) and decompress it. +Related library files are required for Android to call MindSpore Lite Android AAR. You can use MindSpore Lite [source code](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) to generate the `mindspore-lite-maven-{version}.zip` library file package (including the `mindspore-lite-{version}.aar` library file) and decompress it. > version: version number in the output file, which is the same as the version number of the built branch code. @@ -152,9 +152,9 @@ The inference code and process are as follows. For details about the complete co } ``` -2. Convert the input image into the Tensor format that is input to the MindSpore model. +2. Convert the input image into the Tensor format that is input to the MindSpore Lite model. - Convert the image data to be detected into the Tensor format that is input to the MindSpore model. + Convert the image data to be detected into the Tensor format that is input to the MindSpore Lite model. ```java List inputs = model.getInputs(); diff --git a/docs/lite/docs/source_en/infer/quick_start.md b/docs/lite/docs/source_en/infer/quick_start.md index 1a0467b90a..3816f07d46 100644 --- a/docs/lite/docs/source_en/infer/quick_start.md +++ b/docs/lite/docs/source_en/infer/quick_start.md @@ -6,7 +6,7 @@ It is recommended that you start from the image classification demo on the Android device to understand how to build the MindSpore Lite application project, configure dependencies, and use related APIs. -This tutorial demonstrates the on-device deployment process based on the image classification sample program on the Android device provided by the MindSpore team. +This tutorial demonstrates the on-device deployment process based on the image classification sample program on the Android device provided by the MindSpore Lite team. 1. Select an image classification model. 2. Convert the model into a MindSpore Lite model. @@ -118,7 +118,7 @@ app ### Configuring MindSpore Lite Dependencies -When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. 
You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) to generate the `mindspore-lite-{version}-android-{arch}.tar.gz` library package and extract it (contains the `libmindspore-lite.so` library file and related header files). In this case, you need to use the compile command of generate with image preprocessing module. +When MindSpore Lite C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) to generate the `mindspore-lite-{version}-android-{arch}.tar.gz` library package and extract it (contains the `libmindspore-lite.so` library file and related header files). In this case, you need to use the compile command of generate with image preprocessing module. > version: Version number of the .tar package, which is the same as the version of the compiled branch code. > @@ -269,9 +269,9 @@ The inference process code is as follows. For details about the complete code, s } ``` -2. Convert the input image into the Tensor format of the MindSpore model. +2. Convert the input image into the Tensor format of the MindSpore Lite model. - - Cut the size of the image `srcbitmap` to be detected and convert it to LiteMat format `lite_norm_mat_cut`. The width, height and channel number information are converted into float format data `dataHWC`. Finally, copy the `dataHWC` to the input `inTensor` of MindSpore model. + - Cut the size of the image `srcbitmap` to be detected and convert it to LiteMat format `lite_norm_mat_cut`. The width, height and channel number information are converted into float format data `dataHWC`. Finally, copy the `dataHWC` to the input `inTensor` of MindSpore Lite model. ```cpp void **labelEnv = reinterpret_cast(netEnv); @@ -342,7 +342,7 @@ The inference process code is as follows. For details about the complete code, s auto status = mModel->Predict(msInputs, &outputs); ``` - - Get the tensor output `msOutputs` of MindSpore model. The text information `resultCharData` displayed in the APP is calculated through `msOutputs` and classification array information. + - Get the tensor output `msOutputs` of MindSpore Lite model. The text information `resultCharData` displayed in the APP is calculated through `msOutputs` and classification array information. ```cpp auto names = mModel->GetOutputTensorNames(); diff --git a/docs/lite/docs/source_en/infer/quick_start_c.md b/docs/lite/docs/source_en/infer/quick_start_c.md index fc83e33df8..72ad22bb0b 100644 --- a/docs/lite/docs/source_en/infer/quick_start_c.md +++ b/docs/lite/docs/source_en/infer/quick_start_c.md @@ -70,11 +70,11 @@ Performing inference with MindSpore Lite consists of the following main steps: - Compiling and building - - Library downloading: Please manually download the MindSpore Lite model inference framework [mindspore-lite-{version}-win-x64.zip](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) with CPU as the hardware platform and Windows-x64 as the operating system, after decompression copy all files in the `runtime\lib` directory to the `mindspore\lite\examples\quick_start_clib\` project directory, and the files in the `runtime\include` directory to the `mindspore\lite\examples\quick_start_c\include` project directory. 
(Note: the `lib` and `include` directories under the project need to be created manually) + - Library downloading: Please manually download the MindSpore Lite model inference framework [mindspore-lite-{version}-win-x64.zip](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) with CPU as the hardware platform and Windows-x64 as the operating system, after decompression copy all files in the `runtime\lib` directory to the `mindspore-lite\examples\quick_start_clib\` project directory, and the files in the `runtime\include` directory to the `mindspore-lite\examples\quick_start_c\include` project directory. (Note: the `lib` and `include` directories under the project need to be created manually) - - Model downloading: Please manually download the relevant model file [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms) and copy it to the `mindspore\ lite\examples\quick_start_c\model` directory. + - Model downloading: Please manually download the relevant model file [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms) and copy it to the `mindspore-lite\examples\quick_start_c\model` directory. - - Compiling: Execute the [build script](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/quick_start_c/build.bat) in the `mindspore\lite\examples\quick_start_c` directory, which can automatically download the relevant files and compile the Demo. + - Compiling: Execute the [build script](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/quick_start_c/build.bat) in the `mindspore-lite\examples\quick_start_c` directory, which can automatically download the relevant files and compile the Demo. ```bash call build.bat @@ -82,7 +82,7 @@ Performing inference with MindSpore Lite consists of the following main steps: - Executing inference - After compiling and building, go to the `mindspore\lite\examples\quick_start_c\build` directory and execute the following command to experience the MobileNetV2 model inference by MindSpore Lite. + After compiling and building, go to the `mindspore-lite\examples\quick_start_c\build` directory and execute the following command to experience the MobileNetV2 model inference by MindSpore Lite. ```bash set PATH=..\lib;%PATH% diff --git a/docs/lite/docs/source_en/infer/quick_start_cpp.md b/docs/lite/docs/source_en/infer/quick_start_cpp.md index 970dded17b..1097f91380 100644 --- a/docs/lite/docs/source_en/infer/quick_start_cpp.md +++ b/docs/lite/docs/source_en/infer/quick_start_cpp.md @@ -2,7 +2,7 @@ [![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/infer/quick_start_cpp.md) -> MindSpore has unified the inference API. If you want to continue to use the MindSpore Lite independent API for inference, you can refer to the [document](https://www.mindspore.cn/lite/docs/en/r1.3/quick_start/quick_start_cpp.html). +> MindSpore Lite has unified the inference API. If you want to continue to use the MindSpore Lite independent API for inference, you can refer to the [document](https://www.mindspore.cn/lite/docs/en/r1.3/quick_start/quick_start_cpp.html). 
## Overview diff --git a/docs/lite/docs/source_en/infer/runtime_cpp.md b/docs/lite/docs/source_en/infer/runtime_cpp.md index 344c3c57c1..19b5861d12 100644 --- a/docs/lite/docs/source_en/infer/runtime_cpp.md +++ b/docs/lite/docs/source_en/infer/runtime_cpp.md @@ -2,7 +2,7 @@ [![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/infer/runtime_cpp.md) -> MindSpore has unified the inference API. If you want to continue to use the MindSpore Lite independent API for inference, you can refer to the [document](https://www.mindspore.cn/lite/docs/en/r1.3/use/runtime_cpp.html). +> MindSpore Lite has unified the inference API. If you want to continue to use the MindSpore Lite independent API for inference, you can refer to the [document](https://www.mindspore.cn/lite/docs/en/r1.3/use/runtime_cpp.html). ## Overview @@ -319,7 +319,7 @@ MindSpore Lite provides two methods to obtain the input tensor of a model. // Users need to free input_buf. ``` -> The data layout in the input tensor of the MindSpore Lite model must be `NHWC`. For more information about data pre-processing, see step 2 in [Writing On-Device Inference Code](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/quick_start.html#writing-on-device-inference-code) in Android Application Development Based on JNI Interface to convert the input image into the Tensor format of the MindSpore model. +> The data layout in the input tensor of the MindSpore Lite model must be `NHWC`. For more information about data pre-processing, see step 2 in [Writing On-Device Inference Code](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/quick_start.html#writing-on-device-inference-code) in Android Application Development Based on JNI Interface to convert the input image into the Tensor format of the MindSpore Lite model. > > [GetInputs](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#getinputs) and [GetInputByTensorName](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#getinputbytensorname) methods return data that do not need to be released by users. diff --git a/docs/lite/docs/source_en/infer/runtime_java.md b/docs/lite/docs/source_en/infer/runtime_java.md index b8ef71357a..abba3526eb 100644 --- a/docs/lite/docs/source_en/infer/runtime_java.md +++ b/docs/lite/docs/source_en/infer/runtime_java.md @@ -134,9 +134,9 @@ boolean ret = model.build(filePath, ModelType.MT_MINDIR, msContext); ## Inputting Data -MindSpore Lite Java APIs provide the `getInputsByTensorName` and `getInputs` methods to obtain the input tensor. Both the `byte[]` and `ByteBuffer` data types are supported. You can set the data of the input tensor by calling [setData](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/mstensor.html#setdata). +MindSpore Lite Java APIs provide the `getInputByTensorName` and `getInputs` methods to obtain the input tensor. Both the `byte[]` and `ByteBuffer` data types are supported. You can set the data of the input tensor by calling [setData](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/mstensor.html#setdata). -1. Use the [getInputsByTensorName](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/model.html#getinputbytensorname) method to obtain the tensor connected to the input node from the model input tensor based on the name of the model input tensor. 
The following sample code from [MainActivity.java](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/runtime_java/app/src/main/java/com/mindspore/lite/demo/MainActivity.java#L151) demonstrates how to call the `getInputByTensorName` function to obtain the input tensor and fill in data. +1. Use the [getInputByTensorName](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/model.html#getinputbytensorname) method to obtain the tensor connected to the input node from the model input tensor based on the name of the model input tensor. The following sample code from [MainActivity.java](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/runtime_java/app/src/main/java/com/mindspore/lite/demo/MainActivity.java#L151) demonstrates how to call the `getInputByTensorName` function to obtain the input tensor and fill in data. ```java MSTensor inputTensor = model.getInputByTensorName("2031_2030_1_construct_wrapper:x"); diff --git a/docs/lite/docs/source_en/reference/faq.md b/docs/lite/docs/source_en/reference/faq.md index 802fadbfb3..7ee0366709 100644 --- a/docs/lite/docs/source_en/reference/faq.md +++ b/docs/lite/docs/source_en/reference/faq.md @@ -41,7 +41,7 @@ If you encounter an issue when using MindSpore Lite, you can view logs first. In ``` - Analysis: The model contains operators not supported by the MindSpore Lite converter. As a result, the conversion fails. - - Solution: For unsupported operators, add parsers by inheriting the API [NodeParser](https://mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_NodeParser.html) and register the parsers by using [NodeParserRegistry](https://mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_NodeParserRegistry.html). Alternatively, commit an [issue](https://gitee.com/mindspore/mindspore/issues) to MindSpore Lite developers in the community. + - Solution: For unsupported operators, add parsers by inheriting the API [NodeParser](https://mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_NodeParser.html) and register the parsers by using [NodeParserRegistry](https://mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_NodeParserRegistry.html). Alternatively, commit an [issue](https://gitee.com/mindspore/mindspore-lite/issues) to MindSpore Lite developers in the community. 3. Unsupported operators exist. The error log information is as follows: @@ -50,7 +50,7 @@ If you encounter an issue when using MindSpore Lite, you can view logs first. In ``` - Analysis: The converter supports the operator conversion, but does not support a special attribute or parameter of the operator. As a result, the model conversion fails. (The following uses caffe as an example. The log information of other frameworks is the same.) - - Solution: Add the custom operator parsers by inheriting the API [NodeParser](https://mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_NodeParser.html) and register the parsers by using [NodeParserRegistry](https://mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_NodeParserRegistry.html). Alternatively, commit an [issue](https://gitee.com/mindspore/mindspore/issues) to MindSpore Lite developers in the community. 
+ - Solution: Add the custom operator parsers by inheriting the API [NodeParser](https://mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_NodeParser.html) and register the parsers by using [NodeParserRegistry](https://mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_NodeParserRegistry.html). Alternatively, commit an [issue](https://gitee.com/mindspore/mindspore-lite/issues) to MindSpore Lite developers in the community. ## Post-training Quantization Conversion Failed @@ -152,7 +152,7 @@ If you encounter an issue when using MindSpore Lite, you can view logs first. In ``` - Analysis: The input shape of the MS model contains -1, that is, the model input is a dynamic shape. During GPU inference, the operator specifications check related to the shape is skipped in the graph build phase. By default, the GPU supports this operator, and the operator specifications are checked again in the prediction phase. If the operator specifications are not supported, an error is reported and the execution exits. - - Solution: Some operators are not supported. You can modify the operator types or parameter types in the model as prompted to avoid some errors. In most cases, you need to [commit an issue](https://gitee.com/mindspore/mindspore/issues) in the MindSpore community to notify developers to fix and adapt the code. + - Solution: Some operators are not supported. You can modify the operator types or parameter types in the model as prompted to avoid some errors. In most cases, you need to [commit an issue](https://gitee.com/mindspore/mindspore-lite/issues) in the MindSpore Lite community to notify developers to fix and adapt the code. 2. Map buffer errors @@ -170,7 +170,7 @@ If you encounter an issue when using MindSpore Lite, you can view logs first. In ``` - Analysis: In the inference phase, the event check after the OpenCL operator is executed is ignored to improve the performance. However, the event check is inserted into the Enqueue class function in the OpenCL by default. If an error occurs during the execution of the OpenCL operator, an error is returned in the map phase. - - Solution: The OpenCL operator has a bug. You are advised to [commit an issue](https://gitee.com/mindspore/mindspore/issues) in the MindSpore community to notify developers to fix and adapt the code. + - Solution: The OpenCL operator has a bug. You are advised to [commit an issue](https://gitee.com/mindspore/mindspore-lite/issues) in the MindSpore Lite community to notify developers to fix and adapt the code. ### TensorRT GPU Inference Issues @@ -215,7 +215,7 @@ If you encounter an issue when using MindSpore Lite, you can view logs first. In ``` - Analysis: This error is caused by the NPU online graph construction failure. - - Solution: The graph construction is completed by calling the [HiAI DDK](https://developer.huawei.com/consumer/en/doc/development/HiAI-Library/ddk-download-0000001053590180) API. Therefore, the error is reported in the error log of HiAI. For some errors, you can modify the operator type or parameter type in the model as prompted. For most errors, you need to [commit an issue](https://gitee.com/mindspore/mindspore/issues) in the MindSpore community to notify the developers to fix and adapt the code. The following provides common HiAI error messages so that you can clearly describe the issue when asking questions in the community and improve the issue locating efficiency. 
+ - Solution: The graph construction is completed by calling the [HiAI DDK](https://developer.huawei.com/consumer/en/doc/development/HiAI-Library/ddk-download-0000001053590180) API. Therefore, the error is reported in the error log of HiAI. For some errors, you can modify the operator type or parameter type in the model as prompted. For most errors, you need to [commit an issue](https://gitee.com/mindspore/mindspore-lite/issues) in the MindSpore Lite community to notify the developers to fix and adapt the code. The following provides common HiAI error messages so that you can clearly describe the issue when asking questions in the community and improve the issue locating efficiency. (1) Search for the keyword **E AI_FMK** in the log file. If the following error log is found before the "MS_LITE" error is reported: @@ -276,7 +276,7 @@ If you encounter an issue when using MindSpore Lite, you can view logs first. In ``` - If the accuracy of the entire network inference performed by MindSpore Lite is incorrect, you can use the [Dump function](https://mindspore.cn/lite/docs/en/r2.7.0/tools/benchmark_tool.html#dump) of the benchmark tool to save the output of the operator layer and compare the output with the inference result of the original framework to further locate the operator with incorrect accuracy. - - For operators with accuracy issues, you can download the [MindSpore source code](https://gitee.com/mindspore/mindspore) to check the operator implementation and construct the corresponding single-operator network for debugging and fault locating. You can also [commit an issue](https://gitee.com/mindspore/mindspore/issues) in the MindSpore community to MindSpore Lite developers for troubleshooting. + - For operators with accuracy issues, you can download the [MindSpore Lite source code](https://gitee.com/mindspore/mindspore-lite) to check the operator implementation and construct the corresponding single-operator network for debugging and fault locating. You can also [commit an issue](https://gitee.com/mindspore/mindspore-lite/issues) in the MindSpore Lite community to MindSpore Lite developers for troubleshooting. 2. What do I do if the FP32 inference result is correct but the FP16 inference result contains the NaN or Inf value? - If the NaN or Inf value is displayed in the result, value overflow occurs during inference. You can view the model structure, filter out the operator layer where value overflow may occur, and use the [Dump function](https://mindspore.cn/lite/docs/en/r2.7.0/tools/benchmark_tool.html#dump) of the benchmark tool to save the output of the operator layer and confirm the operator where value overflow occurs. @@ -325,7 +325,7 @@ If you encounter an issue when using MindSpore Lite, you can view logs first. In - In most cases, the inference performance of the NPU is much better than that of the CPU. In a few cases, the inference performance of the NPU is poorer than that of the CPU. (1) Check whether there are a large number of Pad or StridedSlice operators in the model. The array format of the NPU is different from that of the CPU. The operation of these operators in the NPU involves array rearrangement. Therefore, the NPU has no advantage over the CPU and even is inferior to the CPU. If you need to run such an operator on the NPU, you are advised to remove or replace the operator. - (2) Use a tool (such as adb logcat) to capture background logs and search for the keyword **BuildIRModel build successfully**. 
It is found that related logs appear multiple times, indicating that the model is partitioned into multiple NPU-related subgraphs during online graph construction. Generally, subgraph partitioning is caused by the existence of Transpose and/or unsupported NPU operators in the graph. Currently, a maximum of 20 subgraphs can be partitioned. The more the subgraphs, the more time the NPU takes. You are advised to refer to the [NPU operators](https://www.mindspore.cn/lite/docs/en/r2.7.0/reference/operator_list_lite.html) supported by MindSpore Lite and avoid unsupported operators during model building. Alternatively, [commit an issue](https://gitee.com/mindspore/mindspore/issues) to MindSpore Lite developers. + (2) Use a tool (such as adb logcat) to capture background logs and search for the keyword **BuildIRModel build successfully**. It is found that related logs appear multiple times, indicating that the model is partitioned into multiple NPU-related subgraphs during online graph construction. Generally, subgraph partitioning is caused by the existence of Transpose and/or unsupported NPU operators in the graph. Currently, a maximum of 20 subgraphs can be partitioned. The more the subgraphs, the more time the NPU takes. You are advised to refer to the [NPU operators](https://www.mindspore.cn/lite/docs/en/r2.7.0/reference/operator_list_lite.html) supported by MindSpore Lite and avoid unsupported operators during model building. Alternatively, [commit an issue](https://gitee.com/mindspore/mindspore-lite/issues) to MindSpore Lite developers. ## Issues Related to Using Visual Studio @@ -413,19 +413,19 @@ A: Currently the MindSpore Lite built-in memory pool has a maximum capacity lim **Q: How do I visualize the MindSpore Lite offline model (.ms file) to view the network structure?** -A: Model visualization open-source repository `Netron` supports viewing MindSpore Lite models (MindSpore >= r1.2), which can be downloaded in the [Netron](https://github.com/lutzroeder/netron). +A: Model visualization open-source repository `Netron` supports viewing MindSpore Lite models (MindSpore Lite >= r1.2), which can be downloaded in the [Netron](https://github.com/lutzroeder/netron).
 
-**Q: Does MindSpore have a quantized inference tool?**
+**Q: Does MindSpore Lite have a quantized inference tool?**
 
 A: [MindSpore Lite](https://www.mindspore.cn/lite/en) supports inference of models produced by quantization aware training on the cloud. The MindSpore Lite converter tool provides post-training quantization and weight quantization functions, which are being continuously improved.
 
-**Q: Does MindSpore have a lightweight on-device inference engine?**
+**Q: Does MindSpore Lite have a lightweight on-device inference engine?**
 
-A:The MindSpore lightweight inference framework MindSpore Lite has been officially launched in r0.7. You are welcome to try it and give your comments. For details about the overview, tutorials, and documents, see [MindSpore Lite](https://www.mindspore.cn/lite/en).
+A: MindSpore Lite, the lightweight on-device inference framework, has been officially launched in r0.7. You are welcome to try it and give your comments. For details about the overview, tutorials, and documents, see [MindSpore Lite](https://www.mindspore.cn/lite/en).
diff --git a/docs/lite/docs/source_en/tools/benchmark_tool.md b/docs/lite/docs/source_en/tools/benchmark_tool.md index 258809fd3e..39d5a4eca1 100644 --- a/docs/lite/docs/source_en/tools/benchmark_tool.md +++ b/docs/lite/docs/source_en/tools/benchmark_tool.md @@ -12,7 +12,7 @@ After model conversion and before inference, you can use the Benchmark tool to p To use the Benchmark tool, you need to prepare the environment as follows: -- Compilation: Install build dependencies and perform build. The code of the Benchmark tool is stored in the `mindspore-lite/tools/benchmark` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#compilation-example) in the build document. +- Compilation: Install build dependencies and perform build. The code of the Benchmark tool is stored in the `mindspore-lite/tools/benchmark` directory of the MindSpore Lite source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#compilation-example) in the build document. - Run: Obtain the `benchmark` tool and configure environment variables. For details, see [Output Description](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) in the build document. @@ -300,7 +300,7 @@ np.fromfile("/path/to/dump.bin", np.float32) To use the Benchmark tool, you need to prepare the environment as follows: -- Compilation: Install build dependencies and perform build. The code of the Benchmark tool is stored in the `mindspore-lite/tools/benchmark` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#compilation-example) in the build document. +- Compilation: Install build dependencies and perform build. The code of the Benchmark tool is stored in the `mindspore-lite/tools/benchmark` directory of the MindSpore Lite source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#compilation-example) in the build document. - Add the path of dynamic library required by the benchmark to the environment variables PATH. ````bash diff --git a/docs/lite/docs/source_en/tools/benchmark_train_tool.md b/docs/lite/docs/source_en/tools/benchmark_train_tool.md index 29080fe66a..fae870ae55 100644 --- a/docs/lite/docs/source_en/tools/benchmark_train_tool.md +++ b/docs/lite/docs/source_en/tools/benchmark_train_tool.md @@ -4,7 +4,7 @@ ## Overview -The same as `benchmark`, you can use the `benchmark_train` tool to perform benchmark testing on a MindSpore ToD (Train on Device) model. It can not only perform quantitative analysis (performance) on the execution duration the model, but also perform comparative error analysis (accuracy) based on the output of the specified model. 
+The same as `benchmark`, you can use the `benchmark_train` tool to perform benchmark testing on a MindSpore Lite ToD (Train on Device) model. It can not only perform quantitative analysis (performance) on the execution duration the model, but also perform comparative error analysis (accuracy) based on the output of the specified model. ## Linux Environment Usage @@ -12,7 +12,7 @@ The same as `benchmark`, you can use the `benchmark_train` tool to perform bench To use the `benchmark_train` tool, you need to prepare the environment as follows: -- Compilation: The code of the `benchmark_train` tool is stored in the `mindspore-lite/tools/benchmark_train` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#compilation-example) in the build document. +- Compilation: The code of the `benchmark_train` tool is stored in the `mindspore-lite/tools/benchmark_train` directory of the MindSpore Lite source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#compilation-example) in the build document. - Configure environment variables: For details, see [Output Description](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#directory-structure-1) in the build document. Suppose the absolute path of MindSpore Lite training package you build is `/path/mindspore-lite-{version}-{os}-{arch}.tar.gz`, the commands to extract the package and configure the LD_LIBRARY_PATH variable are as follows: diff --git a/docs/lite/docs/source_en/tools/cropper_tool.md b/docs/lite/docs/source_en/tools/cropper_tool.md index 443e0c7e73..770ac92c20 100644 --- a/docs/lite/docs/source_en/tools/cropper_tool.md +++ b/docs/lite/docs/source_en/tools/cropper_tool.md @@ -12,7 +12,7 @@ The operating environment of the library cropping tool is x86_64, and currently To use the Cropper tool, you need to prepare the environment as follows: -- Compilation: The code of the Cropper tool is stored in the `mindspore-lite/tools/cropper` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#compilation-example) in the build document to compile version x86_64. +- Compilation: The code of the Cropper tool is stored in the `mindspore-lite/tools/cropper` directory of the MindSpore Lite source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#compilation-example) in the build document to compile version x86_64. - Run: Obtain the `cropper` tool and configure environment variables. For details, see [Output Description](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#environment-requirements) in the build document. 
diff --git a/docs/lite/docs/source_en/tools/obfuscator_tool.md b/docs/lite/docs/source_en/tools/obfuscator_tool.md index b0556ff98c..d2a30f3da4 100644 --- a/docs/lite/docs/source_en/tools/obfuscator_tool.md +++ b/docs/lite/docs/source_en/tools/obfuscator_tool.md @@ -4,7 +4,7 @@ ## Overview -MindSpore Lite provides a lightweight offline model obfuscator to protect the confidentiality of model files deployed on the IoT devices. This tool obfuscates the network structure and operator type of the `ms` model, making the computation logic of the model difficult to understand after the obfuscation. The model generated by the obfuscator is still in `ms` format. You can directly perform inference through the runtime inference framework (The `MSLITE_ENABLE_MODEL_OBF` option in `mindspore/mindspore-lite/CMakeLists.txt` must be enabled during build). Obfuscation slightly increases the model loading latency, but does not affect the inference performance. +MindSpore Lite provides a lightweight offline model obfuscator to protect the confidentiality of model files deployed on the IoT devices. This tool obfuscates the network structure and operator type of the `ms` model, making the computation logic of the model difficult to understand after the obfuscation. The model generated by the obfuscator is still in `ms` format. You can directly perform inference through the runtime inference framework (The `MSLITE_ENABLE_MODEL_OBF` option in `mindspore-lite/mindspore-lite/CMakeLists.txt` must be enabled during build). Obfuscation slightly increases the model loading latency, but does not affect the inference performance. ## Usage in the Linux Environment diff --git a/docs/lite/docs/source_en/tools/visual_tool.md b/docs/lite/docs/source_en/tools/visual_tool.md index c2e7860280..779ba6d1e9 100644 --- a/docs/lite/docs/source_en/tools/visual_tool.md +++ b/docs/lite/docs/source_en/tools/visual_tool.md @@ -10,7 +10,7 @@ ## Functions -- Load the `.ms` models. The MindSpore version must be 1.2.0 or later. +- Load the `.ms` models. The MindSpore Lite version must be 1.2.0 or later. - Display subgraphs. - Display the topology structure and data flow `shape`. - Display the `format`, `input`, and `output` of a model. diff --git a/docs/lite/docs/source_en/train/converter_train.md b/docs/lite/docs/source_en/train/converter_train.md index cf400420c2..996ca42270 100644 --- a/docs/lite/docs/source_en/train/converter_train.md +++ b/docs/lite/docs/source_en/train/converter_train.md @@ -55,7 +55,7 @@ The output of successful conversion is as follows: CONVERT RESULT SUCCESS:0 ``` -This indicates that the MindSpore model is successfully converted to a MindSpore end-side model and a new file `my_model.ms` is generated. If the output of conversion failure is as follows: +This indicates that the MindSpore model is successfully converted to a MindSpore Lite end-side model and a new file `my_model.ms` is generated. If the output of conversion failure is as follows: ```text CONVERT RESULT FAILED: diff --git a/docs/lite/docs/source_en/train/runtime_train_cpp.md b/docs/lite/docs/source_en/train/runtime_train_cpp.md index fea78775c5..242ae70be8 100644 --- a/docs/lite/docs/source_en/train/runtime_train_cpp.md +++ b/docs/lite/docs/source_en/train/runtime_train_cpp.md @@ -24,7 +24,7 @@ The following figure shows the detailed training process: ### Reading Models -A Model file is flatbuffer-serialized file which was converted using the MindSpore Model Converter Tool. These files have a `.ms` extension. 
Before model training or inference, the model needs to be loaded from the file system and parsed. Related operations are mainly implemented in the [Serialization](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore.html) class which holds the model data such as the network structure, weights data and operators attributes.
+A Model file is a flatbuffer-serialized file which was converted using the MindSpore Lite Model Converter Tool. These files have a `.ms` extension. Before model training or inference, the model needs to be loaded from the file system and parsed. Related operations are mainly implemented in the [Serialization](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore.html) class which holds the model data such as the network structure, weights data and operators attributes.
 ### Creating Contexts
@@ -112,7 +112,7 @@ The example allows the user to define the training data processing flow by calli
 ## Executing Training
-MindSpore has provided some off-the-shelf callback classes for users (e.g., `AccuracyMetrics`, `CkptSaver`, `TrainAccuracy`, `LossMonitor` and `Metrics`). The function `Train` and `Evaluate` of the class `Model` can set the model to the training or evaluation mode separately, specify the methods of the data processing and monitor the session status.
+MindSpore Lite has provided some off-the-shelf callback classes for users (e.g., `AccuracyMetrics`, `CkptSaver`, `TrainAccuracy`, `LossMonitor` and `Metrics`). The functions `Train` and `Evaluate` of the class `Model` set the model to the training or evaluation mode respectively, specify the methods of the data processing, and monitor the session status.
 ### Training
diff --git a/docs/lite/docs/source_en/train/runtime_train_java.md b/docs/lite/docs/source_en/train/runtime_train_java.md
index 01c1d78c39..9a1a11f398 100644
--- a/docs/lite/docs/source_en/train/runtime_train_java.md
+++ b/docs/lite/docs/source_en/train/runtime_train_java.md
@@ -24,7 +24,7 @@ The following figure shows the detailed training process:
 ### Reading Models
-A Model file is flatbuffer-serialized file which was converted using the MindSpore Model Converter Tool. These files have a `.ms` extension. Before model training and/or inference, the model needs to be loaded from the file system and parsed. Related operations are mainly implemented in the [Graph](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/graph.html#graph) class which holds the model data such as the network structure, weights data and operators attributes.
+A Model file is a flatbuffer-serialized file which was converted using the MindSpore Lite Model Converter Tool. These files have a `.ms` extension. Before model training and/or inference, the model needs to be loaded from the file system and parsed. Related operations are mainly implemented in the [Graph](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/graph.html#graph) class which holds the model data such as the network structure, weights data and operators attributes.
### Creating Contexts diff --git a/docs/lite/docs/source_en/train/train_lenet.md b/docs/lite/docs/source_en/train/train_lenet.md index c218171c5f..4fb184c049 100644 --- a/docs/lite/docs/source_en/train/train_lenet.md +++ b/docs/lite/docs/source_en/train/train_lenet.md @@ -2,7 +2,7 @@ [![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/train/train_lenet.md) -> MindSpore has unified the end-to-side cloud inference API. If you want to continue to use the MindSpore Lite independent API for training, you can refer to [here](https://www.mindspore.cn/lite/docs/en/r1.3/quick_start/train_lenet.html). +> MindSpore Lite has unified the end-to-side cloud inference API. If you want to continue to use the MindSpore Lite independent API for training, you can refer to [here](https://www.mindspore.cn/lite/docs/en/r1.3/quick_start/train_lenet.html). ## Overview @@ -10,7 +10,7 @@ This tutorial is based on [LeNet training example code](https://gitee.com/mindsp The completed training procedure is as follows: -1. Constructing your training model based on MindSpore Lite Architecture and Export it into `MindIR` model file. +1. Constructing your training model based on MindSpore Architecture and Export it into `MindIR` model file. 2. Converting `MindIR` model file to the `MS` ToD model file by using MindSpore Lite `Converter` tool. 3. Loading `MS` model file and executing model training by calling MindSpore Lite training API. @@ -64,8 +64,8 @@ MindSpore can be installed by source code or using `pip`. Refer to [MindSpore in Use `git` to clone the source code, the command in `Linux` is as follows: ```bash -git clone https://gitee.com/mindspore/mindspore.git -b {version} -cd ./mindspore +git clone https://gitee.com/mindspore/mindspore-lite.git -b {version} +cd ./mindspore-lite ``` The `mindspore-lite/examples/train_lenet_cpp` directory relative to the MindSpore Lite source code contains this demo's source code. The version is consistent with that of [MindSpore Lite Download Page](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) below. If -b the master is specified, you need to obtain the corresponding installation package through [compile from source](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html). diff --git a/docs/lite/docs/source_en/train/train_lenet_java.md b/docs/lite/docs/source_en/train/train_lenet_java.md index b32967855a..64f8e38c87 100644 --- a/docs/lite/docs/source_en/train/train_lenet_java.md +++ b/docs/lite/docs/source_en/train/train_lenet_java.md @@ -20,13 +20,13 @@ This tutorial demonstrates how to use the Java API on MindSpore Lite by building - [OpenJDK](https://openjdk.java.net/install/) 1.8 to 1.15 -### Downloading MindSpore and Building the Java Package for On-device Training +### Downloading MindSpore Lite and Building the Java Package for On-device Training Clone the source code and build the Java package for MindSpore Lite training. The `Linux` command is as follows: ```bash -git clone -b v2.7.0 https://gitee.com/mindspore/mindspore.git -cd mindspore +git clone -b v2.7.0 https://gitee.com/mindspore/mindspore-lite.git +cd mindspore-lite bash build.sh -I x86_64 -j8 ``` @@ -60,7 +60,7 @@ MNIST_Data/ 1. Go to the directory where the sample project is located and execute the sample project. 
The commands are as follows: ```bash - cd /codes/mindspore/mindspore-lite/examples/train_lenet_java + cd /codes/mindspore-lite/mindspore-lite/examples/train_lenet_java ./prepare_and_run.sh -D /PATH/MNIST_Data/ -r ../../../../output/mindspore-lite-${version}-linux-x64.tar.gz ``` diff --git a/docs/lite/docs/source_zh_cn/advanced/micro.md b/docs/lite/docs/source_zh_cn/advanced/micro.md index 8a91a2a4c7..be3b2384e9 100644 --- a/docs/lite/docs/source_zh_cn/advanced/micro.md +++ b/docs/lite/docs/source_zh_cn/advanced/micro.md @@ -32,7 +32,7 @@ MindSpore Lite针对MCUs部署硬件后端,提供了一种超轻量Micro AI部 可以通过两种方式获取转换工具: - - MindSpore官网下载[Release版本](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)。 + - MindSpore Lite官网下载[Release版本](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)。 用户需下载操作系统为Linux-x86_64,硬件平台为CPU的发布包。 @@ -489,7 +489,7 @@ target_device=DSP 在生成模型推理代码之后,用户在对代码进行集成开发之前,需要获得生成的推理代码所依赖的`Micro`库。 不同平台的推理代码依赖对应平台的`Micro`库,用户需根据使用的平台,在生成代码时,通过Micro配置项`target`指定该平台,并在获取`Micro`库时,获得该平台的`Micro`库。 -用户可通过MindSpore官网下载对应平台的[Release版本](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)。 +用户可通过MindSpore Lite官网下载对应平台的[Release版本](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)。 在[模型推理代码生成](#模型推理代码生成)章节,我们得到了x86_64架构Linux平台的模型推理代码,而该代码所依赖的`Micro`库,就在转换工具所使用的发布包内。 发布包内,推理代码所依赖的库和头文件如下: @@ -616,7 +616,7 @@ mnist # 指定的生成代码根目录名称 STM32F767芯片为Cortex-M7架构,可以通过以下两种方式获取该架构的`Micro`库: -- MindSpore官网下载[Release版本](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)。 +- MindSpore Lite官网下载[Release版本](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)。 用户需下载操作系统为None,硬件平台为Cortex-M7的发布包。 @@ -624,7 +624,7 @@ STM32F767芯片为Cortex-M7架构,可以通过以下两种方式获取该架 用户可通过`MSLITE_MICRO_PLATFORM=cortex-m7 bash build.sh -I x86_64`命令,来编译得到`Cortex-M7`的发布包。 -对于暂未提供发布包进行下载的其他Cortex-M架构平台,用户可参考从源码编译构建的方式,修改MindSpore源码,进行手动编译,得到发布包。 +对于暂未提供发布包进行下载的其他Cortex-M架构平台,用户可参考从源码编译构建的方式,修改MindSpore Lite源码,进行手动编译,得到发布包。 ### 在Windows上的代码集成及编译部署:通过IAR进行集成开发 @@ -1222,7 +1222,7 @@ changeable_weights_name=name0,name1 ### 训练导出推理模型的权重 -MindSpore的Serialization类提供了ExportWeightsCollaborateWithMicro函数,ExportWeightsCollaborateWithMicro原型如下: +MindSpore Lite的Serialization类提供了ExportWeightsCollaborateWithMicro函数,ExportWeightsCollaborateWithMicro原型如下: ```cpp static Status ExportWeightsCollaborateWithMicro(const Model &model, ModelType model_type, diff --git a/docs/lite/docs/source_zh_cn/advanced/third_party/converter_register.md b/docs/lite/docs/source_zh_cn/advanced/third_party/converter_register.md index 6b6beaad5f..693ec34e8c 100644 --- a/docs/lite/docs/source_zh_cn/advanced/third_party/converter_register.md +++ b/docs/lite/docs/source_zh_cn/advanced/third_party/converter_register.md @@ -10,9 +10,9 @@ MindSpore Lite的[转换工具](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/ 节点解析扩展:用户自定义模型中某一节点的解析过程,支持ONNX、CAFFE、TF、TFLITE。接口可参考[NodeParser](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#nodeparser)、[NodeParserRegistry](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#nodeparserregistry)。 模型解析扩展:用户自定义模型的整个解析过程,支持ONNX、CAFFE、TF、TFLITE。接口可参考[ModelParser](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#modelparser)、[ModelParserRegistry](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#modelparserregistry)。 
-图优化扩展:模型解析之后,将获得MindSpore定义的图结构,用户可基于此结构自定义图的优化过程。接口可参考[PassBase](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#passbase)、[PassPosition](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#passposition)、[PassRegistry](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#passregistry)。 +图优化扩展:模型解析之后,将获得MindSpore Lite定义的图结构,用户可基于此结构自定义图的优化过程。接口可参考[PassBase](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#passbase)、[PassPosition](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#passposition)、[PassRegistry](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#passregistry)。 -> 节点解析扩展需要依赖flatbuffers和protobuf及三方框架的序列化文件,并且flatbuffers和protobuf需要与发布件采用的版本一致,序列化文件需保证兼容发布件采用的序列化文件。发布件中不提供flatbuffers、protobuf及序列化文件,用户需自行编译,并生成序列化文件。用户可以从[MindSpore仓](https://gitee.com/mindspore/mindspore/tree/v2.7.0)中获取[flatbuffers](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake)、[probobuf](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/protobuf.cmake)、[ONNX原型文件](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/onnx)、[CAFFE原型文件](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/caffe)、[TF原型文件](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/tensorflow)和[TFLITE原型文件](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/tools/converter/parser/tflite/schema.fbs)。 +> 节点解析扩展需要依赖flatbuffers和protobuf及三方框架的序列化文件,并且flatbuffers和protobuf需要与发布件采用的版本一致,序列化文件需保证兼容发布件采用的序列化文件。发布件中不提供flatbuffers、protobuf及序列化文件,用户需自行编译,并生成序列化文件。用户可以从[MindSpore Lite仓](https://gitee.com/mindspore/mindspore-lite/tree/v2.7.0)中获取[flatbuffers](https://gitee.com/mindspore/mindspore-lite/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake)、[probobuf](https://gitee.com/mindspore/mindspore-lite/blob/v2.7.0/cmake/external_libs/protobuf.cmake)、[ONNX原型文件](https://gitee.com/mindspore/mindspore-lite/tree/v2.7.0/third_party/proto/onnx)、[CAFFE原型文件](https://gitee.com/mindspore/mindspore-lite/tree/v2.7.0/third_party/proto/caffe)、[TF原型文件](https://gitee.com/mindspore/mindspore-lite/tree/v2.7.0/third_party/proto/tensorflow)和[TFLITE原型文件](https://gitee.com/mindspore/mindspore-lite/blob/v2.7.0/mindspore-lite/tools/converter/parser/tflite/schema.fbs)。 > > MindSpore Lite还提供了一系列的注册宏,以便于用户侧的扩展接入转换工具。注册宏包括节点解析注册[REG_NODE_PARSER](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#reg-node-parser)、模型解析注册[REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#reg-model-parser)、图优化注册[REG_PASS](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#reg-pass)、图优化调度注册[REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#reg-scheduled-pass)。 @@ -49,7 +49,7 @@ REG_NODE_PARSER(kFmkTypeTflite, ADD, std::make_shared()); ## 模型扩展 -示例代码请参考MindSpore仓模型扩展的单元案例[ModelParserRegistryTest](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/test/ut/tools/converter/registry/model_parser_registry_test.cc)。 +示例代码请参考MindSpore Lite仓模型扩展的单元案例[ModelParserRegistryTest](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/test/ut/tools/converter/registry/model_parser_registry_test.cc)。 ### 优化扩展 @@ -94,7 +94,7 @@ REG_SCHEDULED_PASS(POSITION_BEGIN, {"PassTutorial"}) // 注册调度逻辑 MindSpore 
Lite的发布件不会提供其他框架下的序列化文件,因此,用户需自行编译获得,请参考[概述](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/advanced/third_party/converter_register.html#概述)。 - 本示例采用的是tflite模型,用户需编译[flatbuffers](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake),从[MindSpore仓](https://gitee.com/mindspore/mindspore/tree/v2.7.0)中获取[TFLITE原型文件](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/tools/converter/parser/tflite/schema.fbs),最终生成tflite的序列化文件。 + 本示例采用的是tflite模型,用户需编译[flatbuffers](https://gitee.com/mindspore/mindspore-lite/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake),从[MindSpore Lite仓](https://gitee.com/mindspore/mindspore-lite/tree/v2.7.0)中获取[TFLITE原型文件](https://gitee.com/mindspore/mindspore-lite/blob/v2.7.0/mindspore-lite/tools/converter/parser/tflite/schema.fbs),最终生成tflite的序列化文件。 在`mindspore-lite/examples/converter_extend`目录下创建`schema`文件目录,继而将生成的序列化文件置于`schema`目录下。 diff --git a/docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md b/docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md index 14b8e5c38a..0aeb73d51a 100644 --- a/docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md +++ b/docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md @@ -12,7 +12,7 @@ DDK包含了使用NPU的对外接口(包括模型构建、加载,计算等 ### 编译构建 -在Linux环境下,使用MindSpore[源代码](https://gitee.com/mindspore/mindspore)根目录下的build.sh脚本可以构建集成NPU的MindSpore Lite包,命令如下,它将在MindSpore源代码根目录下的output目录下构建出MindSpore Lite的包,其中包含NPU的动态库,libmindspore-lite动态库以及测试工具Benchmark。 +在Linux环境下,使用MindSpore[源代码](https://gitee.com/mindspore/mindspore-lite)根目录下的build.sh脚本可以构建集成NPU的MindSpore Lite包,命令如下,它将在MindSpore Lite源代码根目录下的output目录下构建出MindSpore Lite的包,其中包含NPU的动态库,libmindspore-lite动态库以及测试工具Benchmark。 ```bash export MSLITE_ENABLE_NPU=ON diff --git a/docs/lite/docs/source_zh_cn/advanced/third_party/tensorrt_info.md b/docs/lite/docs/source_zh_cn/advanced/third_party/tensorrt_info.md index 627c088402..de55670c5e 100644 --- a/docs/lite/docs/source_zh_cn/advanced/third_party/tensorrt_info.md +++ b/docs/lite/docs/source_zh_cn/advanced/third_party/tensorrt_info.md @@ -14,7 +14,7 @@ ### 编译构建 -在Linux环境下,使用MindSpore[源代码](https://gitee.com/mindspore/mindspore)根目录下的build.sh脚本可以构建集成TensorRT的MindSpore Lite包,先配置环境变量`MSLITE_GPU_BACKEND=tensorrt`,再执行编译命令如下,它将在MindSpore源代码根目录下的output目录下构建出MindSpore Lite的包,其中包含`libmindspore-lite.so`以及测试工具Benchmark。 +在Linux环境下,使用MindSpore Lite[源代码](https://gitee.com/mindspore/mindspore-lite)根目录下的build.sh脚本可以构建集成TensorRT的MindSpore Lite包,先配置环境变量`MSLITE_GPU_BACKEND=tensorrt`,再执行编译命令如下,它将在MindSpore Lite源代码根目录下的output目录下构建出MindSpore Lite的包,其中包含`libmindspore-lite.so`以及测试工具Benchmark。 ```bash bash build.sh -I x86_64 diff --git a/docs/lite/docs/source_zh_cn/converter/converter_tool.md b/docs/lite/docs/source_zh_cn/converter/converter_tool.md index 1359c2459a..015166ace6 100644 --- a/docs/lite/docs/source_zh_cn/converter/converter_tool.md +++ b/docs/lite/docs/source_zh_cn/converter/converter_tool.md @@ -86,7 +86,7 @@ MindSpore Lite模型转换工具提供了多种参数设置,用户可根据需 > - `configFile`配置文件采用`key=value`的方式定义相关参数,量化相关的配置参数详见[量化](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/advanced/quantization.html),扩展功能相关的配置参数详见[扩展配置](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/advanced/third_party/converter_register.html#扩展配置)。 > - 
`--optimize`该参数是用来设定在离线转换的过程中需要完成哪些特定的优化。如果该参数设置为none,那么在模型的离线转换阶段将不进行相关的图优化操作,相关的图优化操作将会在执行推理阶段完成。该参数的优点在于转换出来的模型由于没有经过特定的优化,可以直接部署到CPU/GPU/Ascend任意硬件后端;而带来的缺点是推理执行时模型的初始化时间增长。如果设置成general,表示离线转换过程会完成通用优化,包括常量折叠,算子融合等(转换出的模型只支持CPU/GPU后端,不支持Ascend后端)。如果设置成gpu_oriented,表示转换过程中会完成通用优化和针对GPU后端的额外优化(转换出来的模型只支持GPU后端)。如果设置成ascend_oriented,表示转换过程中只完成针对Ascend后端的优化(转换出来的模型只支持Ascend后端)。 > - 加解密功能仅在[编译](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html)时设置为`MSLITE_ENABLE_MODEL_ENCRYPTION=on`时生效,并且仅支持Linux x86平台。其中密钥为十六进制表示的字符串,Linux平台用户可以使用`xxd`工具对字节表示的密钥进行十六进制表达转换。 - 需要注意的是,加解密算法在1.7版本进行了更新,导致新版的converter工具不支持对1.6及其之前版本的MindSpore加密导出的模型进行转换。 + 需要注意的是,加解密算法在1.7版本进行了更新,导致新版的converter工具不支持对1.6及其之前版本的MindSpore Lite加密导出的模型进行转换。 > - `--input_shape`参数以及dynamicDims参数在转换时会被存入模型中,在使用模型时可以调用model.get_model_info("input_shape")以及model.get_model_info("dynamic_dims")来获取。 ### CPU模型编译优化 @@ -178,6 +178,8 @@ MindSpore Lite模型转换工具提供了多种参数设置,用户可根据需 使用MindSpore Lite模型转换工具,需要进行如下环境准备工作。 +- Windows转换工具基于mingw-64编译,依赖相关动态库,须安装[mingw-w64](https://www.mingw-w64.org/downloads/)。 + - [编译](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html)或[下载](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)模型转换工具。 - 将转换工具需要的动态链接库加入环境变量PATH。 diff --git a/docs/lite/docs/source_zh_cn/infer/image_segmentation.md b/docs/lite/docs/source_zh_cn/infer/image_segmentation.md index b354519327..f017a9d140 100644 --- a/docs/lite/docs/source_zh_cn/infer/image_segmentation.md +++ b/docs/lite/docs/source_zh_cn/infer/image_segmentation.md @@ -6,7 +6,7 @@ 推荐用户从端侧Android图像分割demo入手,了解MindSpore Lite应用工程的构建、依赖项配置以及相关Java API的使用。 -本教程基于MindSpore团队提供的Android“端侧图像分割”示例程序,演示了端侧部署的流程。 +本教程基于MindSpore Lite团队提供的Android“端侧图像分割”示例程序,演示了端侧部署的流程。 ## 选择模型 @@ -101,7 +101,7 @@ app ### 配置MindSpore Lite依赖项 -Android调用MindSpore Android AAR时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html)生成`mindspore-lite-maven-{version}.zip`库文件包并解压缩(包含`mindspore-lite-{version}.aar`库文件)。 +Android调用MindSpore Lite Android AAR时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html)生成`mindspore-lite-maven-{version}.zip`库文件包并解压缩(包含`mindspore-lite-{version}.aar`库文件)。 > version:输出件版本号,与所编译的分支代码对应的版本一致。 @@ -151,9 +151,9 @@ Android调用MindSpore Android AAR时,需要相关库文件支持。可通过M } ``` -2. 将输入图片转换为传入MindSpore模型的Tensor格式。 +2. 将输入图片转换为传入MindSpore Lite模型的Tensor格式。 - 将待检测图片数据转换为输入MindSpore模型的Tensor。 + 将待检测图片数据转换为输入MindSpore Lite模型的Tensor。 ```java List inputs = model.getInputs(); diff --git a/docs/lite/docs/source_zh_cn/infer/quick_start.md b/docs/lite/docs/source_zh_cn/infer/quick_start.md index 72262cb912..ddb70ae41c 100644 --- a/docs/lite/docs/source_zh_cn/infer/quick_start.md +++ b/docs/lite/docs/source_zh_cn/infer/quick_start.md @@ -6,7 +6,7 @@ 我们推荐你从端侧Android图像分类demo入手,了解MindSpore Lite应用工程的构建、依赖项配置以及相关API的使用。 -本教程基于MindSpore团队提供的Android“端侧图像分类”示例程序,演示了端侧部署的流程。 +本教程基于MindSpore Lite团队提供的Android“端侧图像分类”示例程序,演示了端侧部署的流程。 1. 选择图像分类模型。 2. 
将模型转换成MindSpore Lite模型格式。 @@ -38,7 +38,7 @@ call converter_lite --fmk=MINDIR --modelFile=mobilenetv2.mindir --outputFile=mob ## 部署应用 -接下来介绍如何构建和执行mindspore Lite端侧图像分类任务。 +接下来介绍如何构建和执行MindSpore Lite端侧图像分类任务。 ### 运行依赖 @@ -122,7 +122,7 @@ app ### 配置MindSpore Lite依赖项 -Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html)生成`mindspore-lite-{version}-android-{arch}.tar.gz`库文件包并解压缩(包含`libmindspore-lite.so`库文件和相关头文件),在本例中需使用生成带图像预处理模块的编译命令。 +Android JNI层调用MindSpore Lite C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html)生成`mindspore-lite-{version}-android-{arch}.tar.gz`库文件包并解压缩(包含`libmindspore-lite.so`库文件和相关头文件),在本例中需使用生成带图像预处理模块的编译命令。 > version:输出件版本号,与所编译的分支代码对应的版本一致。 > @@ -273,9 +273,9 @@ target_link_libraries( # Specifies the target library. } ``` -2. 将输入图片转换为传入MindSpore模型的Tensor格式。 +2. 将输入图片转换为传入MindSpore Lite模型的Tensor格式。 - - 将待检测图片`srcBitmap`进行尺寸裁剪并转换为LiteMat格式`lite_norm_mat_cut`。对其宽高以及通道数信息转换成float格式数据`dataHWC`。最终把`dataHWC`拷贝到MindSpore模型的Tensor输入`inTensor`中。 + - 将待检测图片`srcBitmap`进行尺寸裁剪并转换为LiteMat格式`lite_norm_mat_cut`。对其宽高以及通道数信息转换成float格式数据`dataHWC`。最终把`dataHWC`拷贝到MindSpore Lite模型的Tensor输入`inTensor`中。 ```cpp void **labelEnv = reinterpret_cast(netEnv); @@ -346,7 +346,7 @@ target_link_libraries( # Specifies the target library. auto status = mModel->Predict(msInputs, &outputs); ``` - - 获取对MindSpore模型的Tensor输出`msOutputs`。通过`msOutputs`以及分类数组信息,计算得到在APP中显示的文本信息`resultCharData`。 + - 获取对MindSpore Lite模型的Tensor输出`msOutputs`。通过`msOutputs`以及分类数组信息,计算得到在APP中显示的文本信息`resultCharData`。 ```cpp auto names = mModel->GetOutputTensorNames(); diff --git a/docs/lite/docs/source_zh_cn/infer/quick_start_c.md b/docs/lite/docs/source_zh_cn/infer/quick_start_c.md index ebf3d78c4a..c17618c95e 100644 --- a/docs/lite/docs/source_zh_cn/infer/quick_start_c.md +++ b/docs/lite/docs/source_zh_cn/infer/quick_start_c.md @@ -70,11 +70,11 @@ - 编译构建 - - 库下载:请手动下载硬件平台为CPU、操作系统为Windows-x64的MindSpore Lite模型推理框架[mindspore-lite-{version}-win-x64.zip](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html),将解压后`runtime\lib`目录下的所有文件拷贝到`mindspore\lite\examples\quick_start_c\lib`工程目录、`runtime\include`目录里的文件拷贝到`mindspore\lite\examples\quick_start_c\include`工程目录下。(注意:工程项目下的`lib`、`include`目录需手工创建) + - 库下载:请手动下载硬件平台为CPU、操作系统为Windows-x64的MindSpore Lite模型推理框架[mindspore-lite-{version}-win-x64.zip](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html),将解压后`runtime\lib`目录下的所有文件拷贝到`mindspore-lite\examples\quick_start_c\lib`工程目录、`runtime\include`目录里的文件拷贝到`mindspore-lite\examples\quick_start_c\include`工程目录下。(注意:工程项目下的`lib`、`include`目录需手工创建) - - 模型下载:请手动下载相关模型文件[mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms),并将其拷贝到`mindspore\lite\examples\quick_start_c\model`目录。 + - 模型下载:请手动下载相关模型文件[mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms),并将其拷贝到`mindspore-lite\examples\quick_start_c\model`目录。 - - 编译:在`mindspore\lite\examples\quick_start_c`目录下执行[build脚本](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/quick_start_c/build.bat),将能够自动下载相关文件并编译Demo。 + - 编译:在`mindspore-lite\examples\quick_start_c`目录下执行[build脚本](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/quick_start_c/build.bat),将能够自动下载相关文件并编译Demo。 ```bash call build.bat @@ -82,7 +82,7 @@ - 执行推理 - 编译构建后,进入`mindspore\lite\examples\quick_start_c\build`目录,并执行以下命令,体验MindSpore Lite推理MobileNetV2模型。 + 
编译构建后,进入`mindspore-lite\examples\quick_start_c\build`目录,并执行以下命令,体验MindSpore Lite推理MobileNetV2模型。 ```bash set PATH=..\lib;%PATH% diff --git a/docs/lite/docs/source_zh_cn/infer/quick_start_cpp.md b/docs/lite/docs/source_zh_cn/infer/quick_start_cpp.md index e16d4b68d0..e2b7f057ad 100644 --- a/docs/lite/docs/source_zh_cn/infer/quick_start_cpp.md +++ b/docs/lite/docs/source_zh_cn/infer/quick_start_cpp.md @@ -2,7 +2,7 @@ [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_zh_cn/infer/quick_start_cpp.md) -> MindSpore已经统一了端边云推理API,如您想继续使用MindSpore Lite独立API进行端侧推理,可以参考[此文档](https://www.mindspore.cn/lite/docs/zh-CN/r1.3/quick_start/quick_start_cpp.html)。 +> MindSpore Lite已经统一了端边云推理API,如您想继续使用MindSpore Lite独立API进行端侧推理,可以参考[此文档](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/infer/quick_start_cpp.html)。 ## 概述 @@ -73,11 +73,11 @@ - 编译构建 - - 库下载:请手动下载硬件平台为CPU、操作系统为Windows-x64的MindSpore Lite模型推理框架[mindspore-lite-{version}-win-x64.zip](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html),将解压后`runtime\lib`目录下的所有文件拷贝到`mindspore\lite\examples\quick_start_cpp\lib`工程目录、`runtime\include`目录里的文件拷贝到`mindspore\lite\examples\quick_start_cpp\include`工程目录下。(注意:工程项目下的`lib`、`include`目录需手工创建) + - 库下载:请手动下载硬件平台为CPU、操作系统为Windows-x64的MindSpore Lite模型推理框架[mindspore-lite-{version}-win-x64.zip](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html),将解压后`runtime\lib`目录下的所有文件拷贝到`mindspore-lite\examples\quick_start_cpp\lib`工程目录、`runtime\include`目录里的文件拷贝到`mindspore-lite\examples\quick_start_cpp\include`工程目录下。(注意:工程项目下的`lib`、`include`目录需手工创建) - - 模型下载:请手动下载相关模型文件[mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms),并将其拷贝到`mindspore\lite\examples\quick_start_cpp\model`目录。 + - 模型下载:请手动下载相关模型文件[mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms),并将其拷贝到`mindspore-lite\examples\quick_start_cpp\model`目录。 - - 编译:在`mindspore\lite\examples\quick_start_cpp`目录下执行[build脚本](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/quick_start_cpp/build.bat),将能够自动下载相关文件并编译Demo。 + - 编译:在`mindspore-lite\examples\quick_start_cpp`目录下执行[build脚本](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/quick_start_cpp/build.bat),将能够自动下载相关文件并编译Demo。 ```bash call build.bat @@ -85,7 +85,7 @@ - 执行推理 - 编译构建后,进入`mindspore\lite\examples\quick_start_cpp\build`目录,并执行以下命令,体验MindSpore Lite推理MobileNetV2模型。 + 编译构建后,进入`mindspore-lite\examples\quick_start_cpp\build`目录,并执行以下命令,体验MindSpore Lite推理MobileNetV2模型。 ```bash set PATH=..\lib;%PATH% diff --git a/docs/lite/docs/source_zh_cn/infer/runtime_cpp.md b/docs/lite/docs/source_zh_cn/infer/runtime_cpp.md index b66d79a052..6245356f3f 100644 --- a/docs/lite/docs/source_zh_cn/infer/runtime_cpp.md +++ b/docs/lite/docs/source_zh_cn/infer/runtime_cpp.md @@ -2,7 +2,7 @@ [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_zh_cn/infer/runtime_cpp.md) -> MindSpore已经统一了端边云推理API,如您想继续使用MindSpore Lite独立API进行端侧推理,可以参考[此文档](https://www.mindspore.cn/lite/docs/zh-CN/r1.3/use/runtime_cpp.html)。 +> MindSpore Lite已经统一了端边云推理API,如您想继续使用MindSpore Lite独立API进行端侧推理,可以参考[此文档](https://www.mindspore.cn/lite/docs/zh-CN/r1.3/use/runtime_cpp.html)。 ## 概述 @@ -318,7 +318,7 @@ MindSpore Lite提供两种方法来获取模型的输入Tensor。 // Users 
need to free input_buf. ``` -> MindSpore Lite的模型输入Tensor中的数据排布必须是`NHWC`。如果需要了解更多数据前处理过程,可参考基于JNI接口的Android应用开发中[编写端侧推理代码](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/infer/quick_start.html#编写端侧推理代码)的第2步,将输入图片转换为传入MindSpore模型的Tensor格式。 +> MindSpore Lite的模型输入Tensor中的数据排布必须是`NHWC`。如果需要了解更多数据前处理过程,可参考基于JNI接口的Android应用开发中[编写端侧推理代码](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/infer/quick_start.html#编写端侧推理代码)的第2步,将输入图片转换为传入MindSpore Lite模型的Tensor格式。 > > [GetInputs](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#getinputs)和[GetInputByTensorName](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#getinputbytensorname)方法返回的数据不需要用户释放。 diff --git a/docs/lite/docs/source_zh_cn/infer/runtime_java.md b/docs/lite/docs/source_zh_cn/infer/runtime_java.md index f93fd77cb1..c04f431b0d 100644 --- a/docs/lite/docs/source_zh_cn/infer/runtime_java.md +++ b/docs/lite/docs/source_zh_cn/infer/runtime_java.md @@ -134,12 +134,12 @@ boolean ret = model.build(filePath, ModelType.MT_MINDIR, msContext); ## 输入数据 -MindSpore Lite Java接口提供`getInputsByTensorName`以及`getInputs`两种方法获得输入Tensor,同时支持`byte[]`或者`ByteBuffer`两种类型的数据,通过[setData](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/mstensor.html#setdata)设置输入Tensor的数据。 +MindSpore Lite Java接口提供`getInputByTensorName`以及`getInputs`两种方法获得输入Tensor,同时支持`byte[]`或者`ByteBuffer`两种类型的数据,通过[setData](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/mstensor.html#setdata)设置输入Tensor的数据。 -1. 使用[getInputsByTensorName](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/model.html#getinputsbytensorname)方法,根据模型输入Tensor的名称来获取模型输入Tensor中连接到输入节点的Tensor,下面[示例代码](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/runtime_java/app/src/main/java/com/mindspore/lite/demo/MainActivity.java)演示如何调用`getInputsByTensorName`获得输入Tensor并填充数据。 +1. 使用[getInputByTensorName](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/model.html#getinputbytensorname)方法,根据模型输入Tensor的名称来获取模型输入Tensor中连接到输入节点的Tensor,下面[示例代码](https://gitee.com/mindspore/mindspore-lite/blob/r2.7/mindspore-lite/examples/runtime_java/app/src/main/java/com/mindspore/lite/demo/MainActivity.java)演示如何调用`getInputByTensorName`获得输入Tensor并填充数据。 ```java - MSTensor inputTensor = model.getInputsByTensorName("2031_2030_1_construct_wrapper:x"); + MSTensor inputTensor = model.getInputByTensorName("2031_2030_1_construct_wrapper:x"); // Set Input Data. inputTensor.setData(inputData); ``` diff --git a/docs/lite/docs/source_zh_cn/reference/faq.md b/docs/lite/docs/source_zh_cn/reference/faq.md index 371c085bd1..8391e337e0 100644 --- a/docs/lite/docs/source_zh_cn/reference/faq.md +++ b/docs/lite/docs/source_zh_cn/reference/faq.md @@ -41,7 +41,7 @@ ``` - 问题分析:模型中存在MindSpore Lite转换工具不支持的算子导致转换失败。 - - 解决方法:对于不支持的算子可以尝试通过继承API接口[NodeParser](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#nodeparser) 自行添加parser并通过[NodeParserRegistry](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#nodeparserregistry) 进行Parser注册;或者在社区提[ISSUE](https://gitee.com/mindspore/mindspore/issues) 给MindSpore Lite开发人员处理。 + - 解决方法:对于不支持的算子可以尝试通过继承API接口[NodeParser](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#nodeparser) 自行添加parser并通过[NodeParserRegistry](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#nodeparserregistry) 进行Parser注册;或者在社区提[ISSUE](https://gitee.com/mindspore/mindspore-lite/issues) 给MindSpore Lite开发人员处理。 3. 
存在不支持的算子,日志报错信息: @@ -50,7 +50,7 @@ ``` - 问题分析:转换工具支持该算子转换,但是不支持该算子的某种特殊属性或参数导致模型转换失败(示例日志以caffe为例,其他框架日志信息相同)。 - - 解决方法:可以尝试通过继承API接口[NodeParser](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#nodeparser) 添加自定义算子parser并通过[NodeParserRegistry](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#nodeparserregistry) 进行Parser注册;或者在社区提[ISSUE](https://gitee.com/mindspore/mindspore/issues) 给MindSpore Lite开发人员处理。 + - 解决方法:可以尝试通过继承API接口[NodeParser](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#nodeparser) 添加自定义算子parser并通过[NodeParserRegistry](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#nodeparserregistry) 进行Parser注册;或者在社区提[ISSUE](https://gitee.com/mindspore/mindspore-lite/issues) 给MindSpore Lite开发人员处理。 ## 训练后量化转换失败 @@ -152,7 +152,7 @@ ``` - 问题分析:ms模型的输入shape包含-1,即模型输入为动态shape,GPU推理时在图编译阶段会跳过和Shape相关的算子规格检查,默认GPU支持该算子,并在Predict阶段会再次进行算子规格检查,如果算子规格检查为不支持,则报错退出。 - - 解决方法:由于存在不支持的GPU算子,部分报错用户可根据提示修改模型中的算子类型或参数类型来进行规避,但大部分可能需要通过在MindSpore社区[提ISSUE](https://gitee.com/mindspore/mindspore/issues) 来通知开发人员进行代码修复和适配。 + - 解决方法:由于存在不支持的GPU算子,部分报错用户可根据提示修改模型中的算子类型或参数类型来进行规避,但大部分可能需要通过在MindSpore Lite社区[提ISSUE](https://gitee.com/mindspore/mindspore-lite/issues) 来通知开发人员进行代码修复和适配。 2. Map buffer类错误 @@ -170,7 +170,7 @@ ``` - 问题分析:推理阶段为了提升性能会忽略OpenCL算子执行结束后的Event检查,而OpenCL中Enqueue类函数会默认插入Event检查,如果有OpenCL算子执行出错,会在Map阶段返回错误。 - - 解决办法:由于OpenCL算子存在BUG,建议通过在MindSpore社区[提ISSUE](https://gitee.com/mindspore/mindspore/issues) 来通知开发人员进行代码修复和适配。 + - 解决办法:由于OpenCL算子存在BUG,建议通过在MindSpore Lite社区[提ISSUE](https://gitee.com/mindspore/mindspore-lite/issues) 来通知开发人员进行代码修复和适配。 ### TensorRT GPU 推理问题 @@ -215,7 +215,7 @@ ``` - 问题分析:此报错为NPU在线构图失败。 - - 解决方法:由于构图系通过调用[HiAI DDK](https://developer.huawei.com/consumer/cn/doc/development/HiAI-Library/ddk-download-0000001053590180) 的接口完成,因此报错一般会首先出现在HiAI的错误日志中,部分报错用户可根据提示修改模型中的算子类型或参数类型来进行规避,但大部分可能需要通过在MindSpore社区[提ISSUE](https://gitee.com/mindspore/mindspore/issues) 来通知开发人员进行代码修复和适配。因此,我们下面仅给出较为常见的HiAI报错信息,以便您在社区提问时对问题有更清晰的描述,并加快问题定位的效率。 + - 解决方法:由于构图系通过调用[HiAI DDK](https://developer.huawei.com/consumer/cn/doc/development/HiAI-Library/ddk-download-0000001053590180) 的接口完成,因此报错一般会首先出现在HiAI的错误日志中,部分报错用户可根据提示修改模型中的算子类型或参数类型来进行规避,但大部分可能需要通过在MindSpore Lite社区[提ISSUE](https://gitee.com/mindspore/mindspore-lite/issues) 来通知开发人员进行代码修复和适配。因此,我们下面仅给出较为常见的HiAI报错信息,以便您在社区提问时对问题有更清晰的描述,并加快问题定位的效率。 (1)在日志中搜索“**E AI_FMK**”关键字,若在“MS_LITE”日志报错之前的位置处得到报错日志如下: @@ -276,7 +276,7 @@ ``` - 若MindSpore Lite进行整网推理存在精度问题,可以通过benchmark工具的[Dump功能](https://mindspore.cn/lite/docs/zh-CN/r2.7.0/tools/benchmark_tool.html#dump功能) 保存算子层输出,和原框架推理结果进行对比进一步定位出现精度异常的算子。 - - 针对存在精度问题的算子,可以下载[MindSpore源码](https://gitee.com/mindspore/mindspore) 检查算子实现并构造相应单算子网络进行调试与问题定位;也可以在MindSpore社区[提ISSUE](https://gitee.com/mindspore/mindspore/issues) 给MindSpore Lite的开发人员处理。 + - 针对存在精度问题的算子,可以下载[MindSpore Lite源码](https://gitee.com/mindspore/mindspore-lite) 检查算子实现并构造相应单算子网络进行调试与问题定位;也可以在MindSpore Lite社区[提ISSUE](https://gitee.com/mindspore/mindspore-lite/issues) 给MindSpore Lite的开发人员处理。 2. MindSpore Lite使用fp32推理结果正确,但是fp16推理结果出现NaN或者Inf值怎么办? 
- 结果出现NaN或者Inf值一般为推理过程中出现数值溢出,可以查看模型结构,筛选可能出数值溢出的算子层,然后通过benchmark工具的[Dump功能](https://mindspore.cn/lite/docs/zh-CN/r2.7.0/tools/benchmark_tool.html#dump功能) 保存算子层输出确认出现数值溢出的算子。 @@ -325,7 +325,7 @@ - 绝大多数情况下,NPU的推理性能要大幅优于CPU,但在少数情况下会比CPU更劣: (1)检查模型中是否存在大量Pad或StridedSlice等算子,由于NPU中的数组格式与CPU有所不同,这类算子在NPU中运算时涉及数组的重排,因此相较CPU不存在任何优势,甚至劣于CPU。若确实需要在NPU上运行,建议尝试去除或替换此类算子。 - (2)通过工具(如adb logcat)抓取后台日志,搜索所有“**BuildIRModel build successfully**”关键字,发现相关日志出现了多次,说明模型在线构图时切分为了多张NPU子图,子图的切分一般都是由图中存在Transpose或/和当前不支持的NPU算子引起。目前我们支持最多20张子图的切分,子图数量越多,NPU的整体耗时增加越明显。建议比对MindSpore Lite当前支持的NPU[算子列表](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/reference/operator_list_lite.html),在模型搭建时规避不支持的算子,或在MindSpore社区[提ISSUE](https://gitee.com/mindspore/mindspore/issues) 询问MindSpore Lite的开发人员。 + (2)通过工具(如adb logcat)抓取后台日志,搜索所有“**BuildIRModel build successfully**”关键字,发现相关日志出现了多次,说明模型在线构图时切分为了多张NPU子图,子图的切分一般都是由图中存在Transpose或/和当前不支持的NPU算子引起。目前我们支持最多20张子图的切分,子图数量越多,NPU的整体耗时增加越明显。建议比对MindSpore Lite当前支持的NPU[算子列表](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/reference/operator_list_lite.html),在模型搭建时规避不支持的算子,或在MindSpore Lite社区[提ISSUE](https://gitee.com/mindspore/mindspore-lite/issues) 询问MindSpore Lite的开发人员。 ## 使用Visual Studio相关问题 @@ -416,19 +416,19 @@ A:MindSpore Lite内置内存池有最大容量限制,为3GB,如果模型 **Q:MindSpore Lite的离线模型MS文件如何进行可视化,看到网络结构?** -A:模型可视化开源仓库`Netron`已经支持查看MindSpore Lite模型(MindSpore版本 >= r1.2),请到Netron官网下载安装包[Netron](https://github.com/lutzroeder/netron)。 +A:模型可视化开源仓库`Netron`已经支持查看MindSpore Lite模型(MindSpore Lite版本 >= r1.2),请到Netron官网下载安装包[Netron](https://github.com/lutzroeder/netron)。
-**Q:MindSpore有量化推理工具么?** +**Q:MindSpore Lite有量化推理工具么?** A:[MindSpore Lite](https://www.mindspore.cn/lite)支持云侧量化感知训练的量化模型的推理,MindSpore Lite converter工具提供训练后量化以及权重量化功能,且功能在持续加强完善中。
-**Q:MindSpore有轻量的端侧推理引擎么?**
+**Q:MindSpore Lite有轻量的端侧推理引擎么?**

-A:MindSpore轻量化推理框架MindSpore Lite已于r0.7版本正式上线,欢迎试用并提出宝贵意见,概述、教程和文档等请参考[MindSpore Lite](https://www.mindspore.cn/lite)
+A:MindSpore Lite轻量化推理框架已于r0.7版本正式上线,欢迎试用并提出宝贵意见,概述、教程和文档等请参考[MindSpore Lite](https://www.mindspore.cn/lite)
diff --git a/docs/lite/docs/source_zh_cn/tools/benchmark_tool.md b/docs/lite/docs/source_zh_cn/tools/benchmark_tool.md index 1989a63b90..2eae5e8740 100644 --- a/docs/lite/docs/source_zh_cn/tools/benchmark_tool.md +++ b/docs/lite/docs/source_zh_cn/tools/benchmark_tool.md @@ -12,7 +12,7 @@ 使用Benchmark工具,需要进行如下环境准备工作。 -- 编译:Benchmark工具代码在MindSpore源码的`mindspore-lite/tools/benchmark`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#环境要求)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#编译示例)执行编译。 +- 编译:Benchmark工具代码在MindSpore Lite源码的`mindspore-lite/tools/benchmark`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#环境要求)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#编译示例)执行编译。 - 运行:参考构建文档中的[编译输出](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#编译选项),获得`benchmark`工具。 @@ -300,7 +300,7 @@ np.fromfile("/path/to/dump.bin", np.float32) 使用Benchmark工具,需要进行如下环境准备工作。 -- 编译:Benchmark工具代码在MindSpore源码的`mindspore-lite/tools/benchmark`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#环境要求-1)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#编译示例-1)执行编译。 +- 编译:Benchmark工具代码在MindSpore Lite源码的`mindspore-lite/tools/benchmark`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#环境要求-1)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#编译示例-1)执行编译。 - 将推理需要的动态链接库加入环境变量PATH。 ```bash diff --git a/docs/lite/docs/source_zh_cn/tools/benchmark_train_tool.md b/docs/lite/docs/source_zh_cn/tools/benchmark_train_tool.md index 3c23bfc10f..595302b7a5 100644 --- a/docs/lite/docs/source_zh_cn/tools/benchmark_train_tool.md +++ b/docs/lite/docs/source_zh_cn/tools/benchmark_train_tool.md @@ -4,7 +4,7 @@ ## 概述 -与`benchmark`工具类似,MindSpore端侧训练为你提供了`benchmark_train`工具对训练后的模型进行基准测试。它不仅可以对模型前向推理执行耗时进行定量分析(性能),还可以通过指定模型输出进行可对比的误差分析(精度)。 +与`benchmark`工具类似,MindSpore Lite端侧训练为你提供了`benchmark_train`工具对训练后的模型进行基准测试。它不仅可以对模型前向推理执行耗时进行定量分析(性能),还可以通过指定模型输出进行可对比的误差分析(精度)。 ## Linux环境使用说明 @@ -12,7 +12,7 @@ 使用`benchmark_train`工具,需要进行如下环境准备工作。 -- 编译:`benchmark_train`工具代码在MindSpore源码的`mindspore-lite/tools/benchmark_train`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#环境要求)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#模块构建编译选项)编译端侧训练框架。 +- 编译:`benchmark_train`工具代码在MindSpore Lite源码的`mindspore-lite/tools/benchmark_train`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#环境要求)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#模块构建编译选项)编译端侧训练框架。 - 配置环境变量:参考构建文档中的[编译输出](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#目录结构),获得`benchmark_train`工具,并配置环境变量。假设您编译出的端侧训练框架压缩包所在完整路径为`/path/mindspore-lite-{version}-{os}-{arch}.tar.gz`,解压并配置环境变量的命令如下: diff --git a/docs/lite/docs/source_zh_cn/tools/cropper_tool.md b/docs/lite/docs/source_zh_cn/tools/cropper_tool.md index a4ee80ae06..fb44abccc8 100644 --- a/docs/lite/docs/source_zh_cn/tools/cropper_tool.md +++ b/docs/lite/docs/source_zh_cn/tools/cropper_tool.md @@ -12,7 +12,7 @@ MindSpore Lite提供对Runtime的`libmindspore-lite.a`静态库裁剪工具, 使用MindSpore Lite裁剪工具,需要进行如下环境准备工作。 -- 编译:裁剪工具代码在MindSpore源码的`mindspore-lite/tools/cropper`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#环境要求)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#编译示例)编译x86_64版本。 +- 编译:裁剪工具代码在MindSpore 
Lite源码的`mindspore-lite/tools/cropper`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#环境要求)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#编译示例)编译x86_64版本。 - 运行:参考构建文档中的[编译输出](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html#目录结构),获得`cropper`工具。 diff --git a/docs/lite/docs/source_zh_cn/tools/obfuscator_tool.md b/docs/lite/docs/source_zh_cn/tools/obfuscator_tool.md index 16bc1f14de..9e454a47b8 100644 --- a/docs/lite/docs/source_zh_cn/tools/obfuscator_tool.md +++ b/docs/lite/docs/source_zh_cn/tools/obfuscator_tool.md @@ -4,7 +4,7 @@ ## 概述 -MindSpore Lite提供一个轻量级的离线模型混淆工具,可用于保护IOT或端侧设备上部署的模型文件的机密性。该工具通过对`ms`模型的网络结构和算子类型进行混淆,使得混淆后模型的计算逻辑变得难以理解。通过混淆工具生成的模型仍然是`ms`格式的,可直接通过Runtime推理框架执行推理(编译时需开启`mindspore/mindspore-lite/CMakeLists.txt`中的`MSLITE_ENABLE_MODEL_OBF`选项)。混淆会导致模型加载时延有轻微的增加,但对推理性能没有影响。 +MindSpore Lite提供一个轻量级的离线模型混淆工具,可用于保护IOT或端侧设备上部署的模型文件的机密性。该工具通过对`ms`模型的网络结构和算子类型进行混淆,使得混淆后模型的计算逻辑变得难以理解。通过混淆工具生成的模型仍然是`ms`格式的,可直接通过Runtime推理框架执行推理(编译时需开启`mindspore-lite/mindspore-lite/CMakeLists.txt`中的`MSLITE_ENABLE_MODEL_OBF`选项)。混淆会导致模型加载时延有轻微的增加,但对推理性能没有影响。 ## Linux环境使用说明 diff --git a/docs/lite/docs/source_zh_cn/tools/visual_tool.md b/docs/lite/docs/source_zh_cn/tools/visual_tool.md index 1863a15d2f..3c6d934e39 100644 --- a/docs/lite/docs/source_zh_cn/tools/visual_tool.md +++ b/docs/lite/docs/source_zh_cn/tools/visual_tool.md @@ -10,7 +10,7 @@ ## 功能列表 -- 支持加载`.ms`模型,要求MindSpore版本>=1.2.0; +- 支持加载`.ms`模型,要求MindSpore Lite版本>=1.2.0; - 支持查看子图; - 支持拓扑结构和数据流`shape`的展示; - 支持查看模型的`format`、`input`和`output`等; diff --git a/docs/lite/docs/source_zh_cn/train/converter_train.md b/docs/lite/docs/source_zh_cn/train/converter_train.md index 17e38e7389..e5342378ba 100644 --- a/docs/lite/docs/source_zh_cn/train/converter_train.md +++ b/docs/lite/docs/source_zh_cn/train/converter_train.md @@ -4,10 +4,10 @@ ## 概述 -创建MindSpore端侧模型的步骤: +创建MindSpore Lite端侧模型的步骤: - 首先基于MindSpore架构使用Python创建网络模型,并导出为`.mindir`文件,参见云端的[保存模型](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0/beginner/save_load.html#保存和加载mindir)。 -- 然后将`.mindir`模型文件转换成`.ms`文件,`.ms`文件可以导入端侧设备并基于MindSpore端侧框架训练。 +- 然后将`.mindir`模型文件转换成`.ms`文件,`.ms`文件可以导入端侧设备并基于MindSpore Lite端侧框架训练。 ## Linux环境 @@ -55,7 +55,7 @@ MindSpore Lite 模型转换工具提供了多个参数,目前工具仅支持Li CONVERT RESULT SUCCESS:0 ``` -这表明 MindSpore 模型成功转换为 MindSpore 端侧模型,并生成了新文件`my_model.ms`。如果转换失败输出如下: +这表明 MindSpore 模型成功转换为 MindSpore Lite端侧模型,并生成了新文件`my_model.ms`。如果转换失败输出如下: ```text CONVERT RESULT FAILED: diff --git a/docs/lite/docs/source_zh_cn/train/runtime_train_cpp.md b/docs/lite/docs/source_zh_cn/train/runtime_train_cpp.md index 3ad72a86d5..40d6c99280 100644 --- a/docs/lite/docs/source_zh_cn/train/runtime_train_cpp.md +++ b/docs/lite/docs/source_zh_cn/train/runtime_train_cpp.md @@ -24,7 +24,7 @@ MindSpore Lite训练框架中的[Model](https://www.mindspore.cn/lite/api/zh-CN/ ### 读取模型 -模型文件是一个flatbuffer序列化文件,它通过MindSpore模型转换工具得到,其文件扩展名为`.ms`。在模型训练或推理之前,模型需要从文件系统中加载。相关操作主要在[Serialization](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#serialization)类中实现,该类实现了模型文件读写的方法。 +模型文件是一个flatbuffer序列化文件,它通过MindSpore Lite模型转换工具得到,其文件扩展名为`.ms`。在模型训练或推理之前,模型需要从文件系统中加载。相关操作主要在[Serialization](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#serialization)类中实现,该类实现了模型文件读写的方法。 ### 创建上下文 @@ -112,7 +112,7 @@ int DataSetPipeline() { ## 执行训练 -MindSpore为用户提供了现有的回调类:`AccuracyMetrics`、`CkptSaver`、`TrainAccuracy`、`LossMonitor`和`Metrics`。`Model`类的`Train`和`Evaluate`函数分别将模型设置为训练和验证模式,指定数据预处理方法并监测会话状态。 +MindSpore 
Lite为用户提供了现有的回调类:`AccuracyMetrics`、`CkptSaver`、`TrainAccuracy`、`LossMonitor`和`Metrics`。`Model`类的`Train`和`Evaluate`函数分别将模型设置为训练和验证模式,指定数据预处理方法并监测会话状态。 ### 训练 @@ -473,7 +473,7 @@ if (ret != RET_OK) { ### 保存模型 -MindSpore的`Serialization`类实际调用的是`ExportModel`函数,`ExportModel`原型如下: +MindSpore Lite的`Serialization`类实际调用的是`ExportModel`函数,`ExportModel`原型如下: ```cpp static Status ExportModel(const Model &model, ModelType model_type, const std::string &model_file, diff --git a/docs/lite/docs/source_zh_cn/train/runtime_train_java.md b/docs/lite/docs/source_zh_cn/train/runtime_train_java.md index 302b9c1d4c..8b6923b019 100644 --- a/docs/lite/docs/source_zh_cn/train/runtime_train_java.md +++ b/docs/lite/docs/source_zh_cn/train/runtime_train_java.md @@ -24,7 +24,7 @@ MindSpore Lite训练框架中的[Model](https://www.mindspore.cn/lite/api/zh-CN/ ### 读取模型 -模型文件是一个flatbuffer序列化文件,它通过MindSpore模型转换工具得到,其文件扩展名为`.ms`。在模型训练或推理之前,模型需要从文件系统中加载。相关操作主要在[Graph](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/graph.html#graph)类中实现,该类实现了模型文件读写的方法。 +模型文件是一个flatbuffer序列化文件,它通过MindSpore Lite模型转换工具得到,其文件扩展名为`.ms`。在模型训练或推理之前,模型需要从文件系统中加载。相关操作主要在[Graph](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/graph.html#graph)类中实现,该类实现了模型文件读写的方法。 ### 创建上下文 @@ -177,7 +177,7 @@ bool ret = model.resize(inputs, dims); 在图执行之前,无论执行训练或推理,输入数据必须载入模型的输入张量。MindSpore Lite提供了以下函数来获取模型的输入张量: -1. 使用[getInputsByTensorName](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/model.html#getinputsbytensorname)方法,获取连接到基于张量名称的模型输入节点模型输入张量。 +1. 使用[getInputsByTensorName](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/model.html#getinputbytensorname)方法,获取连接到基于张量名称的模型输入节点模型输入张量。 ```java /** diff --git a/docs/lite/docs/source_zh_cn/train/train_lenet.md b/docs/lite/docs/source_zh_cn/train/train_lenet.md index 8c2aa2e44b..683aa3f4d3 100644 --- a/docs/lite/docs/source_zh_cn/train/train_lenet.md +++ b/docs/lite/docs/source_zh_cn/train/train_lenet.md @@ -2,7 +2,7 @@ [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_zh_cn/train/train_lenet.md) -> MindSpore已经统一端边云推理API,如您想继续使用MindSpore Lite独立API进行端侧训练,可以参考[此文档](https://www.mindspore.cn/lite/docs/zh-CN/r1.3/quick_start/train_lenet.html)。 +> MindSpore Lite 已经统一端边云推理API,如您想继续使用MindSpore Lite独立API进行端侧训练,可以参考[此文档](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/train/train_lenet.html)。 ## 概述 @@ -64,13 +64,13 @@ 通过`git`克隆源码,进入源码目录,`Linux`指令如下: ```bash -git clone https://gitee.com/mindspore/mindspore.git -b {version} -cd ./mindspore +git clone https://gitee.com/mindspore/mindspore-lite.git -b {version} +cd ./mindspore-lite ``` 源码路径下的`mindspore-lite/examples/train_lenet_cpp`目录包含了本示例程序的源码。其中version和下文中[MindSpore Lite下载页面](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)的version保持一致。如果-b 指定master,需要通过[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/build/build.html)的方式获取对应的安装包。 -请到[MindSpore Lite下载页面](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)下载mindspore-lite-{version}-linux-x64.tar.gz以及mindspore-lite-{version}-android-aarch64.tar.gz。其中,mindspore-lite-{version}-linux-x64.tar.gz是MindSpore Lite在x86平台的安装包,里面包含模型转换工具converter_lite,本示例用它来将MINDIR模型转换成MindSpore Lite支持的`.ms`格式;mindspore-lite-{version}-android-aarch64.tar.gz是MindSpore Lite在Android平台的安装包,里面包含训练运行时库libmindspore-lite.so,本示例用它所提供的接口在Android上训练模型。最后将文件放到MindSpore源码下的`output`目录(如果没有`output`目录,请创建它)。 +请到[MindSpore 
Lite下载页面](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/use/downloads.html)下载mindspore-lite-{version}-linux-x64.tar.gz以及mindspore-lite-{version}-android-aarch64.tar.gz。其中,mindspore-lite-{version}-linux-x64.tar.gz是MindSpore Lite在x86平台的安装包,里面包含模型转换工具converter_lite,本示例用它来将MINDIR模型转换成MindSpore Lite支持的`.ms`格式;mindspore-lite-{version}-android-aarch64.tar.gz是MindSpore Lite在Android平台的安装包,里面包含训练运行时库libmindspore-lite.so,本示例用它所提供的接口在Android上训练模型。最后将文件放到MindSpore Lite源码下的`output`目录(如果没有`output`目录,请创建它)。 假设下载的安装包存放在`/Downloads`目录,上述操作对应的`Linux`指令如下: diff --git a/docs/lite/docs/source_zh_cn/train/train_lenet_java.md b/docs/lite/docs/source_zh_cn/train/train_lenet_java.md index 08c3ff5335..1b9c4c1f0f 100644 --- a/docs/lite/docs/source_zh_cn/train/train_lenet_java.md +++ b/docs/lite/docs/source_zh_cn/train/train_lenet_java.md @@ -20,13 +20,13 @@ - [OpenJDK](https://openjdk.java.net/install/) 1.8 到 1.15 -### 下载MindSpore并编译端侧训练Java包 +### 下载MindSpore Lite并编译端侧训练Java包 首先克隆源码,然后编译MindSpore Lite端侧训练Java包,`Linux`指令如下: ```bash -git clone -b v2.7.0 https://gitee.com/mindspore/mindspore.git -cd mindspore +git clone -b v2.7.0 https://gitee.com/mindspore/mindspore-lite.git +cd mindspore-lite bash build.sh -I x86_64 -j8 ``` @@ -60,7 +60,7 @@ MNIST_Data/ 1. 首先进入示例工程所在目录,运行示例程序,命令如下: ```bash - cd /codes/mindspore/mindspore-lite/examples/train_lenet_java + cd /codes/mindspore-lite/mindspore-lite/examples/train_lenet_java ./prepare_and_run.sh -D /PATH/MNIST_Data/ -r ../../../../output/mindspore-lite-${version}-linux-x64.tar.gz ``` -- Gitee