diff --git a/docs/lite/api/_custom/sphinx_builder_html b/docs/lite/api/_custom/sphinx_builder_html index 453a52fea86bbe95cd49a2090bc25b2db53c3901..f6679d50b884b4aa2ec1771a62c5a25c27da3424 100644 --- a/docs/lite/api/_custom/sphinx_builder_html +++ b/docs/lite/api/_custom/sphinx_builder_html @@ -1116,7 +1116,7 @@ class StandaloneHTMLBuilder(Builder): # Add links to the Python operator interface. if "mindspore.ops." in output: - output = re.sub(r'(mindspore\.ops\.\w+) ', r'\1 ', output, count=0) + output = re.sub(r'(mindspore\.ops\.\w+) ', r'\1 ', output, count=0) except UnicodeError: logger.warning(__("a Unicode error occurred when rendering the page %s. " diff --git a/docs/lite/api/source_en/api_c/lite_c_example.rst b/docs/lite/api/source_en/api_c/lite_c_example.rst index c4588f1ec970840d314f8e45510ff1786f997a4d..c35a97a3ffddce6c4f2168c58374f28f1eaff841 100644 --- a/docs/lite/api/source_en/api_c/lite_c_example.rst +++ b/docs/lite/api/source_en/api_c/lite_c_example.rst @@ -4,4 +4,4 @@ Example .. toctree:: :maxdepth: 1 - Simple Demo↗ + Simple Demo↗ diff --git a/docs/lite/api/source_en/api_cpp/lite_cpp_example.rst b/docs/lite/api/source_en/api_cpp/lite_cpp_example.rst index 41711025f8bb1b2b6ac69fe714c5f4a3c7612e20..be1c0c9d812ae05eaa91707d3c4f6d35183de3d3 100644 --- a/docs/lite/api/source_en/api_cpp/lite_cpp_example.rst +++ b/docs/lite/api/source_en/api_cpp/lite_cpp_example.rst @@ -4,6 +4,6 @@ Example .. toctree:: :maxdepth: 1 - Simple Demo↗ - Android Application Development Based on JNI Interface↗ - High-level Usage↗ \ No newline at end of file + Simple Demo↗ + Android Application Development Based on JNI Interface↗ + High-level Usage↗ \ No newline at end of file diff --git a/docs/lite/api/source_en/api_java/ascend_device_info.md b/docs/lite/api/source_en/api_java/ascend_device_info.md index d103b925a1b642797c78d11ec2fa8da2f375abd5..19018c0d3b795dcd87d8547ed988a1f67dd3d165 100644 --- a/docs/lite/api/source_en/api_java/ascend_device_info.md +++ b/docs/lite/api/source_en/api_java/ascend_device_info.md @@ -1,6 +1,6 @@ # AscendDeviceInfo -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/ascend_device_info.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/ascend_device_info.md) ```java import com.mindspore.config.AscendDeviceInfo; diff --git a/docs/lite/api/source_en/api_java/class_list.md b/docs/lite/api/source_en/api_java/class_list.md index 03d1c1b3a48342ddb2f0b120315fc21b2afda6b3..4c3912610200c881df9db83d7a56847f636d4f72 100644 --- a/docs/lite/api/source_en/api_java/class_list.md +++ b/docs/lite/api/source_en/api_java/class_list.md @@ -1,20 +1,20 @@ # Class List -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/class_list.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/class_list.md) | Package | Class Name | Description | Supported At Cloud-side Inference | Supported At Device-side Inference | | 
------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |--------|--------|
-| com.mindspore | [Model](https://www.mindspore.cn/lite/api/en/master/api_java/model.html) | Model defines model in MindSpore for compiling and running compute graph. | √ | √ |
-| com.mindspore.config | [MSContext](https://www.mindspore.cn/lite/api/en/master/api_java/mscontext.html) | MSContext is used to save the context during execution. | √ | √ |
-| com.mindspore | [MSTensor](https://www.mindspore.cn/lite/api/en/master/api_java/mstensor.html) | MSTensor defines the tensor in MindSpore. | √ | √ |
-| com.mindspore | [ModelParallelRunner](https://www.mindspore.cn/lite/api/en/master/api_java/model_parallel_runner.html) | Defines MindSpore Lite concurrent inference. | √ | ✕ |
-| com.mindspore.config | [RunnerConfig](https://www.mindspore.cn/lite/api/en/master/api_java/runner_config.html) | RunnerConfig defines configuration parameters for concurrent inference. | √ | ✕ |
-| com.mindspore | [Graph](https://www.mindspore.cn/lite/api/en/master/api_java/graph.html) | Graph defines the compute graph in MindSpore. | ✕ | √ |
-| com.mindspore.config | [CpuBindMode](https://www.mindspore.cn/lite/api/en/master/api_java/mscontext.html#cpubindmode) | CpuBindMode defines the CPU binding mode. | √ | √ |
-| com.mindspore.config | [DeviceType](https://www.mindspore.cn/lite/api/en/master/api_java/mscontext.html#devicetype) | DeviceType defines the back-end device type. | √ | √ |
-| com.mindspore.config | [DataType](https://www.mindspore.cn/lite/api/en/master/api_java/mstensor.html#datatype) | DataType defines the supported data types. | √ | √ |
-| com.mindspore.config | [Version](https://www.mindspore.cn/lite/api/en/master/api_java/version.html) | Version is used to obtain the version information of MindSpore. | ✕ | √ |
-| com.mindspore.config | [ModelType](https://www.mindspore.cn/lite/api/en/master/api_java/model.html#modeltype) | ModelType defines the model file type. | √ | √ |
-| com.mindspore.config | [AscendDeviceInfo](https://www.mindspore.cn/lite/api/en/master/api_java/ascend_device_info.html) | The AscendDeviceInfo class is used to configure MindSpore Lite Ascend device options. | √ | ✕ |
-| com.mindspore.config | [TrainCfg](https://www.mindspore.cn/lite/api/en/master/api_java/train_cfg.html) | Configuration parameters used for model training on the device. | ✕ | √ |
+| com.mindspore | [Model](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/model.html) | Model defines the model in MindSpore for compiling and running a compute graph. | √ | √ |
+| com.mindspore.config | [MSContext](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/mscontext.html) | MSContext is used to save the context during execution. | √ | √ |
+| com.mindspore | [MSTensor](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/mstensor.html) | MSTensor defines the tensor in MindSpore. | √ | √ |
+| com.mindspore | [ModelParallelRunner](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/model_parallel_runner.html) | Defines MindSpore Lite concurrent inference. | √ | ✕ |
+| com.mindspore.config | [RunnerConfig](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/runner_config.html) | RunnerConfig defines configuration parameters for concurrent inference. | √ | ✕ |
+| com.mindspore | [Graph](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/graph.html) | Graph defines the compute graph in MindSpore. | ✕ | √ |
+| com.mindspore.config | [CpuBindMode](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/mscontext.html#cpubindmode) | CpuBindMode defines the CPU binding mode. | √ | √ |
+| com.mindspore.config | [DeviceType](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/mscontext.html#devicetype) | DeviceType defines the back-end device type. | √ | √ |
+| com.mindspore.config | [DataType](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/mstensor.html#datatype) | DataType defines the supported data types. | √ | √ |
+| com.mindspore.config | [Version](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/version.html) | Version is used to obtain the version information of MindSpore. | ✕ | √ |
+| com.mindspore.config | [ModelType](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/model.html#modeltype) | ModelType defines the model file type. | √ | √ |
+| com.mindspore.config | [AscendDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/ascend_device_info.html) | The AscendDeviceInfo class is used to configure MindSpore Lite Ascend device options. | √ | ✕ |
+| com.mindspore.config | [TrainCfg](https://www.mindspore.cn/lite/api/en/r2.7.0/api_java/train_cfg.html) | Configuration parameters used for model training on the device. | ✕ | √ |
diff --git a/docs/lite/api/source_en/api_java/graph.md b/docs/lite/api/source_en/api_java/graph.md
index b5a7b2b8bd5f5c7503f247651d22d3cedab63023..ff1f5a1a664d6bb6f5681aff8e28dc32e039dee4 100644
--- a/docs/lite/api/source_en/api_java/graph.md
+++ b/docs/lite/api/source_en/api_java/graph.md
@@ -1,6 +1,6 @@
# Graph
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/graph.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/graph.md)
```java
import com.mindspore.Graph;
diff --git a/docs/lite/api/source_en/api_java/lite_java_example.rst b/docs/lite/api/source_en/api_java/lite_java_example.rst
index 01f76f0495b7394007f45abde2213d365095ba6e..38e9ab333fc71684088aefad6a6717817d26d8e5 100644
--- a/docs/lite/api/source_en/api_java/lite_java_example.rst
+++ b/docs/lite/api/source_en/api_java/lite_java_example.rst
@@ -4,6 +4,6 @@ Example
.. toctree::
:maxdepth: 1
- Simple Demo↗
- Android Application Development Based on Java Interface↗
- High-level Usage↗
\ No newline at end of file
+ Simple Demo↗
+ Android Application Development Based on Java Interface↗
+ High-level Usage↗
\ No newline at end of file
diff --git a/docs/lite/api/source_en/api_java/model.md b/docs/lite/api/source_en/api_java/model.md
index 8a170203a5479356d44fc770c6ddc0adf10bb38a..e6b6613ca8f198caa139127e70cc71afa3b88859 100644
--- a/docs/lite/api/source_en/api_java/model.md
+++ b/docs/lite/api/source_en/api_java/model.md
@@ -1,6 +1,6 @@
# Model
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/model.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/model.md)
```java
import com.mindspore.model;
diff --git a/docs/lite/api/source_en/api_java/model_parallel_runner.md b/docs/lite/api/source_en/api_java/model_parallel_runner.md
index 1525eaf8bf9f70b5e0c30e9a3cfcf12a53d22987..05f29c8ee8006a0713c4a64889c5338ea1c9b9e6 100644
--- a/docs/lite/api/source_en/api_java/model_parallel_runner.md
+++ b/docs/lite/api/source_en/api_java/model_parallel_runner.md
@@ -1,6 +1,6 @@
# ModelParallelRunner
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/model_parallel_runner.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/model_parallel_runner.md)
```java
import com.mindspore.config.RunnerConfig;
diff --git a/docs/lite/api/source_en/api_java/mscontext.md b/docs/lite/api/source_en/api_java/mscontext.md
index 1f0b87b9035c02a5e55f825106c3e17ab8e66377..b5a328b42b77eaa3f761d310628a62c4c5dbfcc4 100644
--- a/docs/lite/api/source_en/api_java/mscontext.md
+++ b/docs/lite/api/source_en/api_java/mscontext.md
@@ -1,6 +1,6 @@
# MSContext
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/mscontext.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/mscontext.md)
```java
import com.mindspore.config.MSContext;
@@ -54,7 +54,7 @@ Initialize MSContext for cpu.
- Parameters
- `threadNum`: Thread number config for thread pool.
- - `cpuBindMode`: A **[CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)** **enum** variable.
+ - `cpuBindMode`: A **[CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)** **enum** variable.
- Returns
@@ -69,7 +69,7 @@ Initialize MSContext.
- Parameters
- `threadNum`: Thread number config for thread pool.
- - `cpuBindMode`: A **[CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)** **enum** variable.
+ - `cpuBindMode`: A **[CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)** **enum** variable.
- `isEnableParallel`: Is enable parallel in different device.
- Returns
@@ -86,7 +86,7 @@ Add device info for mscontext.
- Parameters
- - `deviceType`: A **[DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)** **enum** type.
+ - `deviceType`: A **[DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)** **enum** type.
- `isEnableFloat16`: Is enable fp16.
- Returns
@@ -101,7 +101,7 @@ Add device info for mscontext.
- Parameters
- - `deviceType`: A **[DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)** **enum** type.
+ - `deviceType`: A **[DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)** **enum** type.
- `isEnableFloat16`: is enable fp16.
- `npuFreq`: Npu frequency.
diff --git a/docs/lite/api/source_en/api_java/mstensor.md b/docs/lite/api/source_en/api_java/mstensor.md
index bd24f364d4387f69efd74b003d36930ad1ef8bf8..84f4e3455fe18280289b48a5786a2c500600ca66 100644
--- a/docs/lite/api/source_en/api_java/mstensor.md
+++ b/docs/lite/api/source_en/api_java/mstensor.md
@@ -1,6 +1,6 @@
# MSTensor
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/mstensor.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/mstensor.md)
```java
import com.mindspore.MSTensor;
@@ -86,7 +86,7 @@ Get the shape of the MindSpore MSTensor.
public int getDataType()
```
-DataType is defined in [com.mindspore.DataType](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/DataType.java).
+DataType is defined in [com.mindspore.DataType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/DataType.java).
- Returns
diff --git a/docs/lite/api/source_en/api_java/runner_config.md b/docs/lite/api/source_en/api_java/runner_config.md
index 052ac6dc933e6b37245370e5c94b270b79a96d6d..0822b7f7f04aabb5e88d6ada4b759f53516f4829 100644
--- a/docs/lite/api/source_en/api_java/runner_config.md
+++ b/docs/lite/api/source_en/api_java/runner_config.md
@@ -1,6 +1,6 @@
# RunnerConfig
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/runner_config.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/runner_config.md)
RunnerConfig defines the configuration parameters of MindSpore Lite concurrent inference.
diff --git a/docs/lite/api/source_en/api_java/train_cfg.md b/docs/lite/api/source_en/api_java/train_cfg.md
index 2b044eb479975fb39facb91b6e63ec1106c5096d..4f6f9d8b9fed5e1e6939dca079c4c772171b2754 100644
--- a/docs/lite/api/source_en/api_java/train_cfg.md
+++ b/docs/lite/api/source_en/api_java/train_cfg.md
@@ -1,6 +1,6 @@
# TrainCfg
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/train_cfg.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/train_cfg.md)
```java
import com.mindspore.config.TrainCfg;
diff --git a/docs/lite/api/source_en/api_java/version.md b/docs/lite/api/source_en/api_java/version.md
index 99903c00f88851d726b3c6e050568927be42da82..d09235b27f06ab6d106f5bad462a5bd5457638c0 100644
--- a/docs/lite/api/source_en/api_java/version.md
+++ b/docs/lite/api/source_en/api_java/version.md
@@ -1,6 +1,6 @@
# Version
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_en/api_java/version.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_en/api_java/version.md)
```java
import com.mindspore.config.Version;
diff --git a/docs/lite/api/source_en/index.rst b/docs/lite/api/source_en/index.rst
index 5c486f06cc88dd6c4058a7f792c5077af55b121f..065a29fc7f81480580240547861a7aeb658ee28a 100644
--- a/docs/lite/api/source_en/index.rst
+++ b/docs/lite/api/source_en/index.rst
@@ -12,21 +12,21 @@ Summary of MindSpore Lite API support
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Class | Description | C++ API | Python API | +=========================================================+===================================================================================================================================+==========================================================================================================================================================================================================================+============================================================================================================================================================================================================================================================================================================================================================================+ -| Context | Set the number of threads at runtime | void SetThreadNum(int32_t thread_num) | `Context.cpu.thread_num `__ | +| Context | Set the number of threads at runtime | void SetThreadNum(int32_t thread_num) | `Context.cpu.thread_num `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get the current thread number setting | int32_t GetThreadNum() const | `Context.cpu.thread_num `__ | +| Context | Get the current thread number setting | int32_t GetThreadNum() const | `Context.cpu.thread_num `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Set the parallel number of operators at runtime | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num `__ | +| Context | Set the parallel number of operators at runtime | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get the current operators parallel number setting | int32_t GetInterOpParallelNum() const | `Context.cpu.inter_op_parallel_num `__ | +| Context | Get the current operators parallel number setting | int32_t GetInterOpParallelNum() const | `Context.cpu.inter_op_parallel_num `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Set the thread affinity to CPU cores | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode `__ | +| Context | Set the thread affinity to CPU cores | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get the thread affinity of CPU cores | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode `__ | +| Context | Get the thread affinity of CPU cores | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Set the thread lists to CPU cores | void SetThreadAffinity(const std::vector &core_list) | `Context.cpu.thread_affinity_core_list `__ | +| Context | Set the thread lists to CPU cores | void SetThreadAffinity(const std::vector &core_list) | `Context.cpu.thread_affinity_core_list `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get the thread lists of CPU cores | std::vector GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list `__ | +| Context | Get the thread lists of CPU cores | std::vector GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | Set the status whether to perform model inference or training in parallel | void SetEnableParallel(bool is_parallel) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -44,7 +44,7 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | Get the mode of the model run | bool GetMultiModalHW() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get a mutable reference of DeviceInfoContext vector in this context | std::vector> &MutableDeviceInfo() | Wrapped in `Context.target `__ | +| Context | Get a mutable 
reference of DeviceInfoContext vector in this context | std::vector> &MutableDeviceInfo() | Wrapped in `Context.target `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -62,29 +62,29 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | obtain memory allocator | std::shared_ptr GetAllocator() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| CPUDeviceInfo | Get the type of this DeviceInfoContext | enum 
DeviceType GetDeviceType() const | `context.cpu `__ | +| CPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `context.cpu `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| CPUDeviceInfo | Set enables to perform the float16 inference | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode `__ | +| CPUDeviceInfo | Set enables to perform the float16 inference | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| CPUDeviceInfo | Get enables to perform the float16 inference | bool GetEnableFP16() const | `Context.cpu.precision_mode `__ | +| CPUDeviceInfo | Get enables to perform the float16 inference | bool GetEnableFP16() const | `Context.cpu.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.gpu `__ | +| GPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.gpu `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Set device id | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id `__ | +| GPUDeviceInfo | Set device id | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get the device id | uint32_t GetDeviceID() const | `Context.gpu.device_id `__ | +| GPUDeviceInfo | Get the device id | uint32_t GetDeviceID() const | `Context.gpu.device_id `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get the distribution rank id | int GetRankID() const | `Context.gpu.rank_id `__ | +| GPUDeviceInfo | Get the distribution rank id | int GetRankID() const | `Context.gpu.rank_id `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get the distribution group size | int GetGroupSize() const | `Context.gpu.group_size `__ | +| GPUDeviceInfo | Get the distribution group size | int GetGroupSize() const | `Context.gpu.group_size `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | Set the precision mode | void SetPrecisionMode(const std::string &precision_mode) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | Get the precision mode | std::string GetPrecisionMode() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Set enables to perform 
the float16 inference | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode `__ | +| GPUDeviceInfo | Set enables to perform the float16 inference | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get enables to perform the float16 inference | bool GetEnableFP16() const | `Context.gpu.precision_mode `__ | +| GPUDeviceInfo | Get enables to perform the float16 inference | bool GetEnableFP16() const | `Context.gpu.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | Set enables to sharing mem with OpenGL | void SetEnableGLTexture(bool is_enable_gl_texture) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -98,11 +98,11 @@ Summary of MindSpore Lite API support 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | Get current OpenGL display | void \*GetGLDisplay() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.ascend `__ | +| AscendDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.ascend `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Set device id | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id `__ | +| AscendDeviceInfo | Set device id | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Get the device id | uint32_t GetDeviceID() const | `Context.ascend.device_id `__ | +| AscendDeviceInfo | Get the device id | uint32_t GetDeviceID() const | `Context.ascend.device_id `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | Set AIPP configuration file path | void SetInsertOpConfigPath(const std::string &cfg_path) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -132,9 +132,9 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | Get type of model outputs | enum DataType 
GetOutputType() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Set precision mode of model | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode `__ | +| AscendDeviceInfo | Set precision mode of model | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Get precision mode of model | std::string GetPrecisionMode() const | `Context.ascend.precision_mode `__ | +| AscendDeviceInfo | Get precision mode of model | std::string GetPrecisionMode() const | `Context.ascend.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | Set op select implementation mode | void SetOpSelectImplMode(const std::string &op_select_impl_mode) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -160,7 +160,7 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Build a model from model buffer so that it can run on a device | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr &model_context = nullptr) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Load and build a model from model buffer so that it can run on a device | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr &model_context = nullptr) | `Model.build_from_file `__ | +| Model | Load and build a model from a model file so that it can run on a device | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr &model_context = nullptr) | `Model.build_from_file `__ |
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Build a model from model buffer so that it can run on a device | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr &model_context, const Key &dec_key, const std::string &dec_mode, const std::string &cropto_lib_path) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -172,11 +172,11 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Build a Transfer Learning model where the backbone weights are fixed and the head weights are trainable | Status BuildTransferLearning(GraphCell backbone, GraphCell head, const std::shared_ptr &context, const std::shared_ptr &train_cfg = nullptr) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Resize the shapes of inputs | Status Resize(const std::vector &inputs, const std::vector > &dims) | `Model.resize `__ | +| Model | Resize the shapes of inputs | Status Resize(const std::vector &inputs, const std::vector > &dims) | `Model.resize `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Change the size and or content of weight tensors | Status UpdateWeights(const std::vector &new_weights) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Inference model API | Status Predict(const std::vector &inputs, std::vector \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict `__ | +| Model | Inference model API | Status Predict(const std::vector &inputs, std::vector \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Inference model API only with callback | Status Predict(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -188,11 +188,11 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Check if data preprocess exists in model | bool HasPreprocess() | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Load config file | Status LoadConfig(const std::string &config_path) | Wrapped in the parameter `config_path` of `Model.build_from_file `__ | +| 
Model | Load config file | Status LoadConfig(const std::string &config_path) | Wrapped in the parameter `config_path` of `Model.build_from_file `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Update config | Status UpdateConfig(const std::string §ion, const std::pair &config) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Obtains all input tensors of the model | std::vector GetInputs() | `Model.get_inputs `__ | +| Model | Obtains all input tensors of the model | std::vector GetInputs() | `Model.get_inputs `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Obtains the input tensor of the model by name | MSTensor GetInputByTensorName(const std::string &tensor_name) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -220,7 +220,7 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Accessor to TrainLoop metric objects | std::vector GetMetrics() | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Obtains all output tensors of the model | std::vector GetOutputs() | Wrapped in the return value of `Model.predict `__ | +| Model | Obtains all output tensors of the model | std::vector GetOutputs() | Wrapped in the return value of `Model.predict `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Obtains names of all output tensors of 
the model | std::vector GetOutputTensorNames() | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -240,33 +240,33 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Check if the device supports the model | static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Set the number of workers at runtime | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num `__ | +| RunnerConfig | Set the number of workers at runtime | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Get the current operators parallel workers number setting | int32_t GetWorkersNum() const | `Context.parallel.workers_num `__ | +| RunnerConfig | Get the current operators parallel workers number setting | int32_t GetWorkersNum() const | `Context.parallel.workers_num `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Set the context at runtime | void SetContext(const std::shared_ptr &context) | Wrapped in `Context.parallel `__ | +| RunnerConfig | Set the context at runtime | void SetContext(const std::shared_ptr &context) | Wrapped in `Context.parallel `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Get the current context setting | std::shared_ptr GetContext() const | Wrapped in `Context.parallel `__ | +| RunnerConfig | Get the current context setting | std::shared_ptr GetContext() const | Wrapped in `Context.parallel `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Set the config before runtime | void SetConfigInfo(const std::string §ion, const std::map &config) | `Context.parallel.config_info `__ | +| RunnerConfig | Set the config before runtime | void SetConfigInfo(const std::string §ion, const std::map &config) | `Context.parallel.config_info `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Get the current config setting | std::map> GetConfigInfo() const | `Context.parallel.config_info `__ | +| RunnerConfig | Get the current config setting | std::map> GetConfigInfo() const | `Context.parallel.config_info `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Set the config path before runtime | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path `__ | +| RunnerConfig | Set the config path before runtime | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Get the current config path | std::string GetConfigPath() const | `Context.parallel.config_path `__ | +| RunnerConfig | Get the current config path | std::string GetConfigPath() const | `Context.parallel.config_path `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | build a model parallel runner from model path so that it can run on a device | Status Init(const std::string &model_path, const std::shared_ptr &runner_config = nullptr) | `Model.parallel_runner.build_from_file `__ | +| ModelParallelRunner | build a model parallel runner from model path so that it can run on a device | Status Init(const std::string &model_path, const std::shared_ptr &runner_config = nullptr) | `Model.parallel_runner.build_from_file `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelParallelRunner | build a model parallel runner from model buffer so that it can run on a device | Status Init(const void \*model_data, const size_t data_size, const std::shared_ptr &runner_config = nullptr) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | Obtains all input tensors information of the model | std::vector GetInputs() | `Model.parallel_runner.get_inputs `__ | +| ModelParallelRunner | Obtains all input tensors information of the model | std::vector GetInputs() | `Model.parallel_runner.get_inputs `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | Obtains all output tensors information of the model | std::vector GetOutputs() | Wrapped in the return value of `Model.parallel_runner.predict `__ | +| ModelParallelRunner | Obtains all output tensors information of the model | std::vector GetOutputs() | Wrapped in the return value of `Model.parallel_runner.predict `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | Inference ModelParallelRunner | Status Predict(const std::vector &inputs, std::vector \*outputs,const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict `__ | +| ModelParallelRunner | Inference ModelParallelRunner | Status Predict(const std::vector &inputs, std::vector \*outputs,const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Creates a MSTensor object, whose data need to be copied before accessed by Model | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len) noexcept | `Tensor `__ | +| MSTensor | Creates an MSTensor object whose data needs to be copied before being accessed by Model | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len) noexcept | `Tensor `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Creates a MSTensor object, whose data can be directly accessed by Model | static inline MSTensor \*CreateRefTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len, bool own_data = true) noexcept | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -280,19 +280,19 @@ Summary of MindSpore Lite API support
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Destroy an object created by `Clone` , `StringsToTensor` , `CreateRefTensor` or `CreateTensor` | static void DestroyTensorPtr(MSTensor \*tensor) noexcept | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the name of the MSTensor | std::string Name() const | `Tensor.name `__ | +| MSTensor | Obtains the name of the MSTensor | std::string Name() const | `Tensor.name `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the data type of the MSTensor | enum DataType DataType() const | `Tensor.dtype `__ | +| MSTensor | Obtains the data type of the MSTensor | enum DataType DataType() const | `Tensor.dtype `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the shape of the MSTensor | const std::vector &Shape() const | `Tensor.shape `__ | +| MSTensor | Obtains the shape of the MSTensor | const std::vector &Shape() const | `Tensor.shape `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the number of elements of the MSTensor | int64_t ElementNum() const | `Tensor.element_num `__ | +| MSTensor | Obtains the number of elements of the MSTensor | int64_t ElementNum() const | `Tensor.element_num `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Obtains a shared pointer to the copy of data of the MSTensor | std::shared_ptr Data() const | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the pointer to the data of the MSTensor | void \*MutableData() | Wrapped in `Tensor.get_data_to_numpy `__ and `Tensor.set_data_from_numpy `__ | +| MSTensor | Obtains the pointer to the data of the MSTensor | void \*MutableData() | Wrapped in `Tensor.get_data_to_numpy `__ and `Tensor.set_data_from_numpy `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the length of the data of the MSTensor, in bytes | size_t DataSize() const | `Tensor.data_size `__ | +| MSTensor | Obtains the length of the data of the MSTensor, in bytes | size_t DataSize() const | `Tensor.data_size `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Get whether the MSTensor data is const data | bool IsConst() const | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -308,19 +308,19 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Get the boolean value that indicates whether the MSTensor not equals tensor | bool operator!=(const MSTensor &tensor) const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Set the shape of for the MSTensor | void SetShape(const std::vector &shape) | `Tensor.shape `__ | +| MSTensor | Set the shape for the MSTensor | void SetShape(const std::vector &shape) | `Tensor.shape `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Set the
data type for the MSTensor | void SetDataType(enum DataType data_type) | `Tensor.dtype `__ | +| MSTensor | Set the data type for the MSTensor | void SetDataType(enum DataType data_type) | `Tensor.dtype `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Set the name for the MSTensor | void SetTensorName(const std::string &name) | `Tensor.name `__ | +| MSTensor | Set the name for the MSTensor | void SetTensorName(const std::string &name) | `Tensor.name `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Set the Allocator for the MSTensor | void SetAllocator(std::shared_ptr allocator) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Obtain the Allocator of the MSTensor | std::shared_ptr allocator() const | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Set the format for the MSTensor | void SetFormat(mindspore::Format format) | `Tensor.format `__ | +| MSTensor | Set the format for the MSTensor | void SetFormat(mindspore::Format format) | `Tensor.format `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtain the format of the MSTensor | mindspore::Format format() const | `Tensor.format `__ | +| MSTensor | Obtain the format of the MSTensor | mindspore::Format format() const | `Tensor.format `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Set the data for the MSTensor | void SetData(void \*data, bool own_data = true) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -332,15 +332,15 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Set the quantization parameters for the MSTensor | void SetQuantParams(std::vector quant_params) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | Construct a ModelGroup object and indicate shared workspace memory or shared weight memory, with default shared workspace memory | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup `__ | +| ModelGroup | Construct a ModelGroup object and indicate shared workspace memory or shared weight memory, with default shared workspace memory | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | When sharing weight memory, add model objects that require shared weight memory | Status AddModel(const std::vector &model_list) | `ModelGroup.add_model `__ | +| ModelGroup | When sharing weight memory, add model objects that require shared weight memory | Status AddModel(const std::vector &model_list) | `ModelGroup.add_model `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | When sharing workspace memory, add the path of the model that requires shared workspace memory | Status AddModel(const std::vector &model_path_list) | `ModelGroup.add_model `__ | +| ModelGroup | When sharing workspace memory, add the path of the model that requires shared workspace memory | Status AddModel(const std::vector &model_path_list) | `ModelGroup.add_model `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | When sharing workspace memory, add a model buffer that requires shared workspace memory | Status AddModel(const std::vector> &model_buff_list) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | When sharing workspace memory, calculate the maximum workspace memory size | Status CalMaxSizeOfWorkspace(ModelType model_type, const std::shared_ptr &ms_context) | `ModelGroup.cal_max_size_of_workspace `__ | +| ModelGroup | When sharing workspace memory, calculate the maximum workspace memory size | Status CalMaxSizeOfWorkspace(ModelType model_type, const std::shared_ptr &ms_context) | `ModelGroup.cal_max_size_of_workspace `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/docs/lite/api/source_zh_cn/api_c/context_c.md b/docs/lite/api/source_zh_cn/api_c/context_c.md index 5e265507a5a4c1cc37fd482fbc57e7a65742b397..561b3623383fa085e32bdf2474d19c0686895ce3 100644 --- a/docs/lite/api/source_zh_cn/api_c/context_c.md +++ b/docs/lite/api/source_zh_cn/api_c/context_c.md @@ -1,6 +1,6 @@ # context_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_c/context_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_c/context_c.md) ```c #include @@ -198,7 +198,7 @@ MSDeviceInfoHandle MSDeviceInfoCreate(MSDeviceType device_type) 新建运行设备信息,若创建失败则会返回`nullptr`,并日志中输出信息。 - 参数 - - `device_type`: 设备类型,具体见[MSDeviceType](https://www.mindspore.cn/lite/api/zh-CN/master/api_c/types_c.html#msdevicetype)。 + - `device_type`: 设备类型,具体见[MSDeviceType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_c/types_c.html#msdevicetype)。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_c/data_type_c.md b/docs/lite/api/source_zh_cn/api_c/data_type_c.md index ac6c4b6384887fd3d36aa458e855831904af7d07..a1d3e81f11939f435b8deeaecea8468f29d02a6f 100644 --- a/docs/lite/api/source_zh_cn/api_c/data_type_c.md +++ b/docs/lite/api/source_zh_cn/api_c/data_type_c.md @@ -1,6 +1,6 @@ # data_type_c 
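The `ModelGroup` rows in the support-summary table above describe a three-step flow: construct the group with a sharing flag, register the models, then size the shared workspace. The following is a minimal sketch of that flow under stated assumptions — the header paths, the CPU context setup, and the model file names are illustrative and do not come from the table itself:

```cpp
#include <memory>
#include <string>
#include <vector>
#include "include/api/context.h"      // assumed header locations
#include "include/api/model_group.h"
#include "include/api/status.h"
#include "include/api/types.h"

int main() {
  // A context with one CPU device, purely for illustration.
  auto context = std::make_shared<mindspore::Context>();
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());

  // Default flag kShareWorkspace: the grouped models share workspace memory.
  mindspore::ModelGroup model_group(mindspore::ModelGroupFlag::kShareWorkspace);

  // For workspace sharing, models are added by path (per the table above, the
  // Model-object overload of AddModel is the one used for weight sharing).
  std::vector<std::string> model_paths = {"model_a.mindir", "model_b.mindir"};  // hypothetical paths
  if (model_group.AddModel(model_paths) != mindspore::kSuccess) {
    return -1;
  }

  // Pre-compute the largest workspace any of the grouped models will need.
  if (model_group.CalMaxSizeOfWorkspace(mindspore::kMindIR, context) != mindspore::kSuccess) {
    return -1;
  }
  return 0;
}
```

The group only arranges the shared allocation; each model is still built and run through the ordinary `Model` interface afterwards.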
-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_c/data_type_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_c/data_type_c.md) ```C #include diff --git a/docs/lite/api/source_zh_cn/api_c/format_c.md b/docs/lite/api/source_zh_cn/api_c/format_c.md index 3b57375f73ee2225e474b006a018edc8a550f79b..684ea81c68b866794242e9951a4df5f1b9661891 100644 --- a/docs/lite/api/source_zh_cn/api_c/format_c.md +++ b/docs/lite/api/source_zh_cn/api_c/format_c.md @@ -1,6 +1,6 @@ # format_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_c/format_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_c/format_c.md) ```C #include diff --git a/docs/lite/api/source_zh_cn/api_c/lite_c_example.rst b/docs/lite/api/source_zh_cn/api_c/lite_c_example.rst index 9def15a73ba9657997156d0608ff819af596b3df..868ea3a867405516c57f75fc40115c909bc69c97 100644 --- a/docs/lite/api/source_zh_cn/api_c/lite_c_example.rst +++ b/docs/lite/api/source_zh_cn/api_c/lite_c_example.rst @@ -4,4 +4,4 @@ .. toctree:: :maxdepth: 1 - 极简Demo↗ + 极简Demo↗ diff --git a/docs/lite/api/source_zh_cn/api_c/model_c.md b/docs/lite/api/source_zh_cn/api_c/model_c.md index 5822f01b7ade74f89e95e509b71742c68227f54a..6a15327d26f988435036f2553f6fb1355b92b68e 100644 --- a/docs/lite/api/source_zh_cn/api_c/model_c.md +++ b/docs/lite/api/source_zh_cn/api_c/model_c.md @@ -1,6 +1,6 @@ # model_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_c/model_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_c/model_c.md) ```C #include @@ -145,8 +145,8 @@ MSStatus MSModelBuild(MSModelHandle model, const void* model_data, size_t data_s - `model`: 指向模型对象的指针。 - `model_data`: 内存中已经加载的模型数据地址。 - `data_size`: 模型数据的长度。 - - `model_type`: 模型文件类型,具体见: [MSModelType](https://mindspore.cn/lite/api/zh-CN/master/api_c/types_c.html#msmodeltype)。 - - `model_context`: 模型的上下文环境,具体见: [Context](https://mindspore.cn/lite/api/zh-CN/master/api_c/context_c.html)。 + - `model_type`: 模型文件类型,具体见: [MSModelType](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_c/types_c.html#msmodeltype)。 + - `model_context`: 模型的上下文环境,具体见: [Context](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_c/context_c.html)。 - 返回值 @@ -165,8 +165,8 @@ MSStatus MSModelBuildFromFile(MSModelHandle model, const char* model_path, MSMod - `model`: 指向模型对象的指针。 - `model_path`: 模型文件路径。 - - `model_type`: 模型文件类型,具体见: [MSModelType](https://mindspore.cn/lite/api/zh-CN/master/api_c/types_c.html#msmodeltype)。 - - `model_context`: 模型的上下文环境,具体见: [Context](https://mindspore.cn/lite/api/zh-CN/master/api_c/context_c.html)。 + - `model_type`: 模型文件类型,具体见: [MSModelType](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_c/types_c.html#msmodeltype)。 + - 
`model_context`: 模型的上下文环境,具体见: [Context](https://mindspore.cn/lite/api/zh-CN/r2.7.0/api_c/context_c.html)。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_c/tensor_c.md b/docs/lite/api/source_zh_cn/api_c/tensor_c.md index bf50fa6d563e667d5c74de6af3a7ad405e85db39..ac6140e8b90924b8fb878ca65cbf7384837f8016 100644 --- a/docs/lite/api/source_zh_cn/api_c/tensor_c.md +++ b/docs/lite/api/source_zh_cn/api_c/tensor_c.md @@ -1,6 +1,6 @@ # tensor_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_c/tensor_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_c/tensor_c.md) ```C #include @@ -123,7 +123,7 @@ void MSTensorSetDataType(MSTensorHandle tensor, MSDataType type) MSDataType MSTensorGetDataType(const MSTensorHandle tensor) ``` -获取MSTensor的数据类型,具体数据类型见[MSDataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_c/data_type_c.html#msdatatype)。 +获取MSTensor的数据类型,具体数据类型见[MSDataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_c/data_type_c.html#msdatatype)。 - 参数 - `tensor`: 指向MSTensor的指针。 @@ -171,7 +171,7 @@ void MSTensorSetFormat(MSTensorHandle tensor, MSFormat format) - 参数 - `tensor`: 指向MSTensor的指针。 - - `format`: 张量的数据排列,具体见[MSFormat](https://www.mindspore.cn/lite/api/zh-CN/master/api_c/format_c.html#msformat)。 + - `format`: 张量的数据排列,具体见[MSFormat](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_c/format_c.html#msformat)。 ### MSTensorGetFormat @@ -183,7 +183,7 @@ MSFormat MSTensorGetFormat(const MSTensorHandle tensor) - 返回值 - 张量的数据排列,具体见[MSFormat](https://www.mindspore.cn/lite/api/zh-CN/master/api_c/format_c.html#msformat)。 + 张量的数据排列,具体见[MSFormat](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_c/format_c.html#msformat)。 ### MSTensorSetData diff --git a/docs/lite/api/source_zh_cn/api_c/types_c.md b/docs/lite/api/source_zh_cn/api_c/types_c.md index c9f2421eba7911ad44696fbc5eb81af2f6d9b337..60bd19d91b1a07f0a6d24a695ca67f5de679e1f6 100644 --- a/docs/lite/api/source_zh_cn/api_c/types_c.md +++ b/docs/lite/api/source_zh_cn/api_c/types_c.md @@ -1,6 +1,6 @@ # types_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_c/types_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_c/types_c.md) ```C #include diff --git a/docs/lite/api/source_zh_cn/api_cpp/lite_cpp_example.rst b/docs/lite/api/source_zh_cn/api_cpp/lite_cpp_example.rst index ecdf9d26248719343aa45cd2d7217615cced2eb9..ef2640b7ba8318ac00fe3dad15506f6c21dbf1b6 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/lite_cpp_example.rst +++ b/docs/lite/api/source_zh_cn/api_cpp/lite_cpp_example.rst @@ -4,6 +4,6 @@ .. 
toctree:: :maxdepth: 1 - 极简Demo↗ - 基于JNI接口的Android应用开发↗ - 高阶用法↗ \ No newline at end of file + 极简Demo↗ + 基于JNI接口的Android应用开发↗ + 高阶用法↗ \ No newline at end of file diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore.md index 4e6bc8c3b0c2c329bc9861ac53a460a0642d4d90..132653aa999569b6956142ef837295768d716542 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore.md @@ -1,6 +1,6 @@ # mindspore -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_cpp/mindspore.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_cpp/mindspore.md) ## 接口汇总 @@ -36,8 +36,8 @@ |--------------------------------------------------|---------------------------------------------------|--------|--------| | [MSTensor](#mstensor) | MindSpore中的张量。 | √ | √ | | [QuantParam](#quantparam) | MSTensor中的一组量化参数。 | √ | √ | -| [DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html) | MindSpore MSTensor保存的数据支持的类型。 | √ | √ | -| [Format](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_format.html) | MindSpore MSTensor保存的数据支持的排列格式。 | √ | √ | +| [DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html) | MindSpore MSTensor保存的数据支持的类型。 | √ | √ | +| [Format](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_format.html) | MindSpore MSTensor保存的数据支持的排列格式。 | √ | √ | | [Allocator](#allocator-1) | 内存管理基类。 | √ | √ | ### 模型分组 @@ -117,7 +117,7 @@ ## Context -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/context.h)> Context类用于保存执行中的环境变量。 @@ -155,9 +155,9 @@ Context的数据。 | [bool GetEnableParallel() const](#getenableparallel) | ✕ | √ | | [void SetBuiltInDelegate(DelegateMode mode)](#setbuiltindelegate) | ✕ | √ | | [DelegateMode GetBuiltInDelegate() const](#getbuiltindelegate) | ✕ | √ | -| [void set_delegate(const std::shared_ptr\ &delegate)](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#set-delegate) | ✕ | √ | +| [void set_delegate(const std::shared_ptr\ &delegate)](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#set-delegate) | ✕ | √ | | [void SetDelegate(const std::shared_ptr\ &delegate)](#setdelegate) | ✕ | √ | -| [std::shared_ptr\ get_delegate() const](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#get-delegate) | ✕ | √ | +| [std::shared_ptr\ get_delegate() const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#get-delegate) | ✕ | √ | | [std::shared_ptr\ GetDelegate() const](#getdelegate) | ✕ | √ | | [void SetMultiModalHW(bool float_mode)](#setmultimodalhw) | ✕ | √ | | [bool GetMultiModalHW() const](#getmultimodalhw) | ✕ | √ | @@ -405,7 +405,7 @@ std::vector> &MutableDeviceInfo() ## DelegateMode -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/context.h)> ```cpp enum DelegateMode { @@ -418,7 +418,7 @@ Delegate模式。 ## DeviceInfoContext -\#include 
<[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/context.h)> DeviceInfoContext类定义不同硬件设备的环境信息。 @@ -549,7 +549,7 @@ std::shared_ptr GetAllocator() const ## CPUDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/context.h)> 派生自[DeviceInfoContext](#deviceinfocontext),模型运行在CPU上的配置。 @@ -594,7 +594,7 @@ bool GetEnableFP16() const ## GPUDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/context.h)> 派生自[DeviceInfoContext](#deviceinfocontext),模型运行在GPU上的配置。 @@ -781,7 +781,7 @@ void *GetGLDisplay() const ## KirinNPUDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/context.h)> 派生自[DeviceInfoContext](#deviceinfocontext),模型运行在NPU上的配置。 @@ -797,7 +797,7 @@ void *GetGLDisplay() const ## AscendDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/context.h)> 派生自[DeviceInfoContext](#deviceinfocontext),模型运行在Atlas 200/300/500推理产品、Atlas推理系列产品上的配置。 @@ -849,7 +849,7 @@ using Key = struct MS_API Key { ## Serialization -\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/serialization.h)> +\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/serialization.h)> Serialization类汇总了模型文件读写的方法。 @@ -1119,7 +1119,7 @@ Buffer Clone() const; ## Model -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/model.h)> Model定义了MindSpore中的模型,便于计算图管理。 @@ -1890,7 +1890,7 @@ Status Finalize(); ## MSTensor -\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/types.h)> +\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/types.h)> `MSTensor`定义了MindSpore中的张量。 @@ -2085,10 +2085,10 @@ void DestroyTensorPtr(MSTensor *tensor) noexcept; | [bool IsConst() const](#isconst) | √ | √ | | [bool IsDevice() const](#isdevice) | √ | ✕ | | [MSTensor *Clone() const](#clone) | √ | √ | -| [bool operator==(std::nullptr_t) const](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#operatorstd-nullptr-t) | √ | √ | -| [bool operator!=(std::nullptr_t) const](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#operatorstd-nullptr-t-1) | √ | √ | -| [bool operator!=(const MSTensor &tensor) const](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#operatorconst-mstensor-tensor) | √ | √ | -| [bool operator==(const MSTensor &tensor) const](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#operatorconst-mstensor-tensor-1) | √ | √ | +| [bool operator==(std::nullptr_t) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#operatorstd-nullptr-t) | √ | √ | +| [bool operator!=(std::nullptr_t) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#operatorstd-nullptr-t-1) | √ | √ | +| [bool operator!=(const MSTensor 
&tensor) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#operatorconst-mstensor-tensor) | √ | √ | +| [bool operator==(const MSTensor &tensor) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#operatorconst-mstensor-tensor-1) | √ | √ | | [void SetShape(const std::vector\ &shape)](#setshape) | √ | √ | | [void SetDataType(enum DataType data_type)](#setdatatype) | √ | √ | | [void SetTensorName(const std::string &name)](#settensorname) | √ | √ | @@ -2414,7 +2414,7 @@ const std::shared_ptr impl() ## QuantParam -\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/types.h)> +\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/types.h)> 一个结构体。QuantParam定义了MSTensor的一组量化参数。 @@ -2462,7 +2462,7 @@ max ## MSKernelCallBack -\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/types.h)> +\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/types.h)> ```cpp using MSKernelCallBack = std::function &inputs, const std::vector &outputs, const MSCallBackParam &opInfo)> @@ -2472,7 +2472,7 @@ using MSKernelCallBack = std::function &inputs, ## MSCallBackParam -\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/types.h)> +\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/types.h)> 一个结构体。MSCallBackParam定义了回调函数的输入参数。 @@ -2504,7 +2504,7 @@ execute_time ## Delegate -\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/delegate.h)> +\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/delegate.h)> `Delegate`定义了第三方AI框架接入MindSpore Lite的代理接口。 @@ -2591,7 +2591,7 @@ void ReplaceNodes(const std::shared_ptr &graph) override {} ## CoreMLDelegate -\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/delegate.h)> +\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/delegate.h)> `CoreMLDelegate`继承自`Delegate`类,定义了CoreML框架接入MindSpore Lite的代理接口。 @@ -2633,7 +2633,7 @@ CoreMLDelegate在线构图,仅在内部图编译阶段调用。 ## SchemaVersion -\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/delegate.h)> +\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/delegate.h)> 定义了MindSpore Lite执行在线推理时模型文件的版本。 @@ -2647,9 +2647,9 @@ typedef enum { ## KernelIter -\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/delegate.h)> +\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/delegate.h)> -定义了MindSpore Lite [Kernel](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_kernel.html#mindspore-kernel)列表的迭代器。 +定义了MindSpore Lite [Kernel](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_kernel.html#mindspore-kernel)列表的迭代器。 ```cpp using KernelIter = std::vector::iterator; @@ -2657,7 +2657,7 @@ using KernelIter = std::vector::iterator; ## DelegateModel -\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/delegate.h)> +\#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/delegate.h)> `DelegateModel`定义了MindSpore Lite Delegate机制操作的的模型对象。 @@ -2683,7 +2683,7 @@ DelegateModel(std::vector *kernels, const std::vector *kernels_; ``` -[**Kernel**](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_kernel.html#kernel)的列表,保存模型的所有算子。 
+[**Kernel**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_kernel.html#kernel)的列表,保存模型的所有算子。 #### inputs_ @@ -2691,7 +2691,7 @@ std::vector *kernels_; const std::vector &inputs_; ``` -[**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)的列表,保存这个算子的输入tensor。 +[**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)的列表,保存这个算子的输入tensor。 #### outputs_ @@ -2699,7 +2699,7 @@ const std::vector &inputs_; const std::vector &outputs; ``` -[**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)的列表,保存这个算子的输出tensor。 +[**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)的列表,保存这个算子的输出tensor。 #### primitives_ @@ -2707,7 +2707,7 @@ const std::vector &outputs; const std::map &primitives_; ``` -[**Kernel**](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_kernel.html#kernel)和**schema::Primitive**的Map,保存所有算子的属性。 +[**Kernel**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_kernel.html#kernel)和**schema::Primitive**的Map,保存所有算子的属性。 #### version_ @@ -2799,7 +2799,7 @@ const std::vector &inputs() - 返回值 - [**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)的列表。 + [**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)的列表。 #### outputs @@ -2811,7 +2811,7 @@ const std::vector &outputs() - 返回值 - [**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)的列表。 + [**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)的列表。 #### GetVersion @@ -2827,7 +2827,7 @@ const SchemaVersion GetVersion() ## AbstractDelegate -\#include <[delegate_api.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/delegate_api.h)> +\#include <[delegate_api.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/delegate_api.h)> `AbstractDelegate`定义了MindSpore Lite 创建Delegate(抽象类)。 @@ -2883,7 +2883,7 @@ std::vector outputs_ std::vector outputs_ ``` -\#include <[delegate_api.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/delegate_api.h)> +\#include <[delegate_api.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/delegate_api.h)> `IDelegate`定义了MindSpore Lite 创建Delegate(模板类)。 @@ -2929,7 +2929,7 @@ virtual std::shared_ptr CreateKernel(const std::shared_ptr &node) ## TrainCfg -\#include <[cfg.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/cfg.h)> +\#include <[cfg.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/cfg.h)> `TrainCfg`MindSpore Lite训练的相关配置参数。 @@ -3018,7 +3018,7 @@ inline void SetLossName(const std::vector &loss_name); ## MixPrecisionCfg -\#include <[cfg.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/cfg.h)> +\#include <[cfg.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/cfg.h)> `MixPrecisionCfg`MindSpore Lite训练混合精度配置类。 @@ -3082,7 +3082,7 @@ bool keep_batchnorm_fp32_ = true; ## AccuracyMetrics -\#include <[accuracy.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/metrics/accuracy.h)> +\#include <[accuracy.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/metrics/accuracy.h)> `AccuracyMetrics`MindSpore Lite训练精度类。 @@ -3130,7 +3130,7 @@ float Eval() override; ## Metrics -\#include <[metrics.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/metrics/metrics.h)> +\#include 
<[metrics.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/metrics/metrics.h)> `Metrics`MindSpore Lite训练指标类。 @@ -3177,7 +3177,7 @@ virtual void Update(std::vector inputs, std::vector outp ## TrainCallBack -\#include <[callback.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/callback.h)> +\#include <[callback.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/callback.h)> `Metrics`MindSpore Lite训练回调类。 @@ -3276,7 +3276,7 @@ virtual void Begin(const TrainCallBackData &cb_data) {} ## TrainCallBackData -\#include <[callback.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/callback.h)> +\#include <[callback.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/callback.h)> 一个结构体。TrainCallBackData定义了训练回调的一组参数。 @@ -3316,7 +3316,7 @@ model_ ## CkptSaver -\#include <[ckpt_saver.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/ckpt_saver.h)> +\#include <[ckpt_saver.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/ckpt_saver.h)> `Metrics`MindSpore Lite训练模型文件保存类。 @@ -3329,7 +3329,7 @@ model_ ## LossMonitor -\#include <[loss_monitor.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/loss_monitor.h)> +\#include <[loss_monitor.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/loss_monitor.h)> `Metrics`MindSpore Lite训练损失函数类。 @@ -3356,7 +3356,7 @@ model_ ## LRScheduler -\#include <[lr_scheduler.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/lr_scheduler.h)> +\#include <[lr_scheduler.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/lr_scheduler.h)> `Metrics`MindSpore Lite训练学习率调度类。 @@ -3369,7 +3369,7 @@ model_ ## StepLRLambda -\#include <[lr_scheduler.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/lr_scheduler.h)> +\#include <[lr_scheduler.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/lr_scheduler.h)> 一个结构体。StepLRLambda定义了训练学习率的一组参数。 @@ -3393,7 +3393,7 @@ gamma ## MultiplicativeLRLambda -\#include <[lr_scheduler.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/lr_scheduler.h)> +\#include <[lr_scheduler.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/lr_scheduler.h)> 每个epoch将学习率乘以一个因子。 @@ -3421,7 +3421,7 @@ int MultiplicativeLRLambda(float *lr, int epoch, void *multiplication) ## TimeMonitor -\#include <[time_monitor.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/time_monitor.h)> +\#include <[time_monitor.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/time_monitor.h)> `Metrics`MindSpore Lite训练时间监测类。 @@ -3467,7 +3467,7 @@ int MultiplicativeLRLambda(float *lr, int epoch, void *multiplication) ## TrainAccuracy -\#include <[train_accuracy.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/callback/train_accuracy.h)> +\#include <[train_accuracy.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/callback/train_accuracy.h)> `Metrics`MindSpore Lite训练学习率调度类。 @@ -3526,7 +3526,7 @@ std::vector CharVersion() |-----------------------|--------|--------| | [std::string Version()](#version) | ✕ | √ | -\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/types.h)> +\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/types.h)> ```cpp std::string Version() @@ -3540,7 +3540,7 @@ std::string Version() ## Allocator -\#include 
<[allocator.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/allocator.h)> +\#include <[allocator.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/allocator.h)> 内存管理基类。 @@ -3694,11 +3694,11 @@ inline Status(const StatusCode code, int line_of_code, const char *file_name, co | [inline std::string GetErrDescription() const](#geterrdescription) | √ | √ | | [inline std::string SetErrDescription(const std::string &err_description)](#seterrdescription) | √ | √ | | [inline void SetStatusMsg(const std::string &status_msg)](#setstatusmsg) | √ | √ | -| [friend std::ostream &operator\<\<(std::ostream &os, const Status &s)](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#operator< Construct(const std::vector &inputs) {return ## Cell -\#include <[cell.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/cell.h)> +\#include <[cell.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/cell.h)> ### 析构函数 @@ -4187,7 +4187,7 @@ std::vector operator()(const std::vector &inputs) const; ## GraphCell -\#include <[cell.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/cell.h)> +\#include <[cell.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/cell.h)> ### 构造函数和析构函数 @@ -4283,7 +4283,7 @@ Status Load(uint32_t device_id); ## RunnerConfig -\#include <[model_parallel_runner.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/model_parallel_runner.h)> +\#include <[model_parallel_runner.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/model_parallel_runner.h)> RunnerConfig定义了ModelParallelRunner中使用的配置选项参数。 @@ -4432,7 +4432,7 @@ std::vector GetDeviceIds() const ## ModelParallelRunner -\#include <[model_parallel_runner.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/model_parallel_runner.h)> +\#include <[model_parallel_runner.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/model_parallel_runner.h)> ModelParallelRunner定义了MindSpore的多个Model以及并发策略,便于多个Model的调度与管理。 @@ -4534,7 +4534,7 @@ std::vector GetOutputs() ## ModelGroup -\#include <[model_group.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/model_group.h)> +\#include <[model_group.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/model_group.h)> ModelGroup 类定义MindSpore Lite模型分组信息,用于共享工作空间(Workspace)内存或者权重(包括常量和变量)内存。 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_converter.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_converter.md index cb0952ca9b4640d8b64559a25d36b5154ee641f4..7dfc7df33c569ea01b66e699f59a929ffe52b434 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_converter.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_converter.md @@ -1,6 +1,6 @@ # mindspore::converter -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_cpp/mindspore_converter.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_cpp/mindspore_converter.md) 以下描述了MindSpore Lite转换支持的模型类型及用户扩展所需的必要信息。 @@ -17,7 +17,7 @@ ## FmkType -\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/converter_context.h)> +\#include 
<[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/converter_context.h)> **enum**类型变量,定义MindSpore Lite转换支持的框架类型。 @@ -32,7 +32,7 @@ ## ConverterParameters -\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/converter_context.h)> +\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/converter_context.h)> **struct**类型结构体,定义模型解析时的转换参数,用于模型解析时的只读参数。 @@ -47,7 +47,7 @@ struct ConverterParameters { ## ConverterContext -\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/converter_context.h)> +\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/converter_context.h)> 模型转换过程中,基本信息的设置与获取。 @@ -113,7 +113,7 @@ static std::map GetConfigInfo(const std::string §i ## NodeParser -\#include <[node_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/node_parser.h)> +\#include <[node_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/node_parser.h)> op节点的解析基类。 @@ -216,7 +216,7 @@ tflite节点解析接口函数。 ## NodeParserPtr -\#include <[node_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/node_parser.h)> +\#include <[node_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/node_parser.h)> NodeParser类的共享智能指针类型。 @@ -226,7 +226,7 @@ using NodeParserPtr = std::shared_ptr; ## ModelParser -\#include <[model_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/model_parser.h)> +\#include <[model_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/model_parser.h)> 解析原始模型的基类。 @@ -258,7 +258,7 @@ api::FuncGraphPtr Parse(const converter::ConverterParameters &flags); - 参数 - - `flags`: 解析模型时基本信息,具体见[ConverterParameters](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#converterparameters)。 + - `flags`: 解析模型时基本信息,具体见[ConverterParameters](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#converterparameters)。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md index bdbd5f6227614362b4a97bc3b85b7b950caf0b32..a6939c5fced159ef683726e85a84372d90314cb9 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md @@ -1,6 +1,6 @@ # DataType -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md) 以下表格描述了MindSpore MSTensor保存的数据支持的类型。 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md index 95a410d325ddd0a983231b34275aa33d3d94ceac..ae470ed876a46a56111c3edff4e99a479136e3f5 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md +++ 
b/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md @@ -1,6 +1,6 @@ # mindspore::Format -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md) 以下表格描述了MindSpore MSTensor保存的数据支持的排列格式。 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md index 561efb0c9bbd9d1748f59863a63c685d5104965e..8e92660ee6c5499ac962b841699c88e88c2392c2 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md @@ -1,6 +1,6 @@ # mindspore::kernel -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md) ## 接口汇总 @@ -13,7 +13,7 @@ ## Kernel -\#include <[kernel.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/kernel.h)> +\#include <[kernel.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/kernel.h)> Kernel是算子实现的基类,定义了几个必须实现的接口。继承自IKernel。 @@ -32,13 +32,13 @@ Kernel的默认与带参构造函数,构造Kernel实例。 - 参数 - - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)。 + - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)。 - - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)。 + - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)。 - `primitive`: 算子经由flatbuffers反序化为Primitive后的结果。 - - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#context)。 + - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#context)。 ### 析构函数 @@ -59,7 +59,7 @@ virtual int InferShape() ``` 在用户调用`Model::Build`接口时,或是模型推理中需要推理算子形状时,会调用到该接口。 -在自定义算子场景中,用户可以覆写该接口,实现自定义算子的形状推理逻辑。详见[自定义算子章节](https://www.mindspore.cn/lite/docs/zh-CN/master/advanced/third_party/register_kernel.html)。 +在自定义算子场景中,用户可以覆写该接口,实现自定义算子的形状推理逻辑。详见[自定义算子章节](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/advanced/third_party/register_kernel.html)。 在`InferShape`函数中,一般需要实现算子的形状、数据类型和数据排布的推理逻辑。 - 返回值 @@ -84,7 +84,7 @@ virtual schema::QuantType quant_type() ## KernelInterface -\#include <[kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/kernel_interface.h)> +\#include <[kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/kernel_interface.h)> 算子扩展能力基类。 @@ -117,9 +117,9 @@ virtual Status Infer(std::vector *inputs, std::vector *inputs, std::vector *inputs, std::vector +\#include <[kernel.h](https://gitee.com/mindspore/mindspore/blob/v2.7.0/include/api/kernel_api.h)> Mindspore Kernel 算子类。是IKernel的父类。 @@ -185,7 +185,7 @@ virtual int InferShape() ``` 
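To show the override pattern in one place: the sketch below fills in `InferShape` for a hypothetical single-output custom operator, assuming the `Kernel` base class documented above together with the `Prepare`/`Execute`/`ReSize` trio from the custom-operator material; the class name, the header path, and the use of `0` as the success code are assumptions, not part of this reference.

```cpp
#include "include/api/kernel.h"  // assumed location of mindspore::kernel::Kernel

// Hypothetical custom operator whose output mirrors its first input.
class MyCustomKernel : public mindspore::kernel::Kernel {
 public:
  // Reuse the (inputs, outputs, primitive, ctx) constructor documented above.
  using mindspore::kernel::Kernel::Kernel;

  int Prepare() override { return 0; }   // one-off setup before the first run
  int ReSize() override { return 0; }    // react to input-shape changes
  int Execute() override { return 0; }   // the actual compute would live here

  // Called from Model::Build, or whenever shapes must be re-inferred: propagate
  // shape, data type and format from input 0 to output 0.
  int InferShape() override {
    outputs_[0].SetShape(inputs_[0].Shape());
    outputs_[0].SetDataType(inputs_[0].DataType());
    outputs_[0].SetFormat(inputs_[0].format());
    return 0;  // assumed success code; check kernel.h in your release
  }
};
```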
在用户调用`Model::Build`接口时,或是模型推理中需要推理算子形状时,会调用到该接口。 -在自定义算子场景中,用户可以覆写该接口,实现自定义算子的形状推理逻辑。详见[自定义算子章节](https://www.mindspore.cn/lite/docs/zh-CN/master/advanced/third_party/register_kernel.html)。 +在自定义算子场景中,用户可以覆写该接口,实现自定义算子的形状推理逻辑。详见[自定义算子章节](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/advanced/third_party/register_kernel.html)。 在`InferShape`函数中,一般需要实现算子的形状、数据类型和数据排布的推理逻辑。 - 返回值 @@ -239,7 +239,7 @@ virtual void set_inputs(const std::vector &in_tensors) { th - 参数 - - `in_tensors`: 算子的所有输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)列表。 + - `in_tensors`: 算子的所有输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)列表。 #### set_input @@ -251,7 +251,7 @@ virtual void set_input(mindspore::MSTensor in_tensor, int index) { this->inputs_ - 参数 - - `in_tensor`: 算子的输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)。 + - `in_tensor`: 算子的输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)。 - `index`: 算子输入在所有输入中的下标,从0开始计数。 @@ -265,7 +265,7 @@ virtual void set_outputs(const std::vector &out_tensors) { - 参数 - - `out_tensor`: 算子的所有输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)列表。 + - `out_tensor`: 算子的所有输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)列表。 #### set_output @@ -277,7 +277,7 @@ virtual void set_output(mindspore::MSTensor out_tensor, int index) { this->outpu - 参数 - - `out_tensor`: 算子的输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)。 + - `out_tensor`: 算子的输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)。 - `index`: 算子输出在所有输出中的下标,从0开始计数。 @@ -287,7 +287,7 @@ virtual void set_output(mindspore::MSTensor out_tensor, int index) { this->outpu virtual const std::vector &inputs() ``` -返回算子的所有输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)列表。 +返回算子的所有输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)列表。 - 返回值 @@ -299,7 +299,7 @@ virtual const std::vector &inputs() virtual const std::vector &outputs() ``` -返回算子的所有输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)列表。 +返回算子的所有输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)列表。 - 返回值 @@ -335,7 +335,7 @@ void set_name(const std::string &name) const lite::Context *context() const ``` -返回算子对应的[Context](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#context)。 +返回算子对应的[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#context)。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md index c08c29c9d2878618ac95fcab968a46f52f1514e1..8c6986bc542d1b09c49a8eff15a2869307f92271 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md @@ -1,32 +1,32 @@ # mindspore::registry -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md) 
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md) ## 接口汇总 | 类名 | 描述 | | --- | --- | | [NodeParserRegistry](#nodeparserregistry) | 扩展Node解析的注册类。| -| [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_registry.html#reg-node-parser) | 注册扩展Node解析。| +| [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#reg-node-parser) | 注册扩展Node解析。| | [ModelParserRegistry](#modelparserregistry) | 扩展Model解析的注册类。| -| [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_registry.html#reg-model-parser) | 注册扩展Model解析。| +| [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#reg-model-parser) | 注册扩展Model解析。| | [PassBase](#passbase) | Pass的基类。| | [PassPosition](#passposition) | 扩展Pass的运行位置。| | [PassRegistry](#passregistry) | 扩展Pass注册构造类。| -| [REG_PASS](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_registry.html#reg-pass) | 注册扩展Pass。| -| [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_registry.html#reg-scheduled-pass) | 注册扩展Pass的调度顺序。| +| [REG_PASS](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#reg-pass) | 注册扩展Pass。| +| [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#reg-scheduled-pass) | 注册扩展Pass的调度顺序。| | [RegisterKernel](#registerkernel) | 算子注册实现类。| | [KernelReg](#kernelreg) | 算子注册构造类。| -| [REGISTER_KERNEL](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_registry.html#register-kernel) | 注册算子。| -| [REGISTER_CUSTOM_KERNEL](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_registry.html#register-custom-kernel) | 注册Custom算子注册。| +| [REGISTER_KERNEL](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#register-kernel) | 注册算子。| +| [REGISTER_CUSTOM_KERNEL](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#register-custom-kernel) | 注册Custom算子注册。| | [RegisterKernelInterface](#registerkernelinterface) | 算子扩展能力注册实现类。| | [KernelInterfaceReg](#kernelinterfacereg) | 算子扩展能力注册构造类。| -| [REGISTER_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_registry.html#register-kernel-interface) | 注册算子扩展能力。| -| [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_registry.html#register-custom-kernel-interface) | 注册Custom算子扩展能力。| +| [REGISTER_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#register-kernel-interface) | 注册算子扩展能力。| +| [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_registry.html#register-custom-kernel-interface) | 注册Custom算子扩展能力。| ## NodeParserRegistry -\#include <[node_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/node_parser_registry.h)> +\#include <[node_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/node_parser_registry.h)> NodeParserRegistry类用于注册及获取NodeParser类型的共享智能指针。 @@ -41,11 +41,11 @@ NodeParserRegistry(converter::FmkType fmk_type, const std::string &node_type, - 参数 - - `fmk_type`: 
框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#fmktype)说明。 - `node_type`: 节点的类型。 - - `node_parser`: NodeParser类型的共享智能指针实例, 具体见[NodeParserPtr](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#nodeparserptr)说明。 + - `node_parser`: NodeParser类型的共享智能指针实例, 具体见[NodeParserPtr](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#nodeparserptr)说明。 ### ~NodeParserRegistry @@ -67,13 +67,13 @@ static converter::NodeParserPtr GetNodeParser(converter::FmkType fmk_type, const - 参数 - - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#fmktype)说明。 - `node_type`: 节点的类型。 ## REG_NODE_PARSER -\#include <[node_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/node_parser_registry.h)> +\#include <[node_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/node_parser_registry.h)> ```c++ #define REG_NODE_PARSER(fmk_type, node_type, node_parser) @@ -83,25 +83,25 @@ static converter::NodeParserPtr GetNodeParser(converter::FmkType fmk_type, const - 参数 - - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#fmktype)说明。 - `node_type`: 节点的类型。 - - `node_parser`: NodeParser类型的共享智能指针实例, 具体见[NodeParserPtr](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#nodeparserptr)说明。 + - `node_parser`: NodeParser类型的共享智能指针实例, 具体见[NodeParserPtr](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#nodeparserptr)说明。 ## ModelParserCreator -\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/model_parser_registry.h)> +\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/model_parser_registry.h)> ```c++ typedef converter::ModelParser *(*ModelParserCreator)() ``` -创建[ModelParser](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#modelparser)的函数原型声明。 +创建[ModelParser](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#modelparser)的函数原型声明。 ## ModelParserRegistry -\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/model_parser_registry.h)> +\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/model_parser_registry.h)> ModelParserRegistry类用于注册及获取ModelParserCreator类型的函数指针。 @@ -115,7 +115,7 @@ ModelParserRegistry(FmkType fmk, ModelParserCreator creator) - 参数 - - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#fmktype)说明。 - `creator`: ModelParserCreator类型的函数指针, 具体见[ModelParserCreator](#modelparsercreator)说明。 @@ -139,11 +139,11 @@ static ModelParser *GetModelParser(FmkType fmk) - 
参数 - - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#fmktype)说明。 ## REG_MODEL_PARSER -\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/model_parser_registry.h)> +\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/model_parser_registry.h)> ```c++ #define REG_MODEL_PARSER(fmk, parserCreator) @@ -153,15 +153,15 @@ static ModelParser *GetModelParser(FmkType fmk) - 参数 - - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#fmktype)说明。 - `creator`: ModelParserCreator类型的函数指针, 具体见[ModelParserCreator](#modelparsercreator)说明。 -> 用户自定义的ModelParser,框架类型必须满足设定支持的框架类型[FmkType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_converter.html#fmktype)。 +> 用户自定义的ModelParser,框架类型必须满足设定支持的框架类型[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_converter.html#fmktype)。 ## PassBase -\#include <[pass_base.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/pass_base.h)> +\#include <[pass_base.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/pass_base.h)> PassBase定义了图优化的基类,以供用户继承并自定义图优化算法。 @@ -201,7 +201,7 @@ virtual bool Execute(const api::FuncGraphPtr &func_graph) = 0; ## PassBasePtr -\#include <[pass_base.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/pass_base.h)> +\#include <[pass_base.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/pass_base.h)> PassBase类的共享智能指针类型。 @@ -211,7 +211,7 @@ using PassBasePtr = std::shared_ptr ## PassPosition -\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/pass_registry.h)> +\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/pass_registry.h)> **enum**类型变量,定义扩展Pass的运行位置。 @@ -224,7 +224,7 @@ enum PassPosition { ## PassRegistry -\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/pass_registry.h)> +\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/pass_registry.h)> PassRegistry类用于注册及获取Pass类实例。 @@ -290,7 +290,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name) ## REG_PASS -\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/pass_registry.h)> +\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/pass_registry.h)> ```c++ #define REG_PASS(name, pass) @@ -306,7 +306,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name) ## REG_SCHEDULED_PASS -\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/pass_registry.h)> +\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/pass_registry.h)> ```c++ #define REG_SCHEDULED_PASS(position, names) @@ -322,7 
+322,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name) > MindSpore Lite开放了部分内置Pass,请见以下说明。用户可以在`names`参数中添加内置Pass的命名标识,以在指定运行处调用内置Pass。 > -> - `ConstFoldPass`: 将输入均是常量的节点进行离线计算,导出的模型将不含该节点。特别地,针对shape算子,在[inputShape](https://www.mindspore.cn/lite/docs/zh-CN/master/converter/converter_tool.html#参数说明)给定的情形下,也会触发预计算。 +> - `ConstFoldPass`: 将输入均是常量的节点进行离线计算,导出的模型将不含该节点。特别地,针对shape算子,在[inputShape](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.0/converter/converter_tool.html#参数说明)给定的情形下,也会触发预计算。 > - `DumpGraph`: 导出当前状态下的模型。请确保当前模型为NHWC或者NCHW格式的模型,例如卷积算子等。 > - `ToNCHWFormat`: 将当前状态下的模型转换为NCHW的格式,例如,四维的图输入、卷积算子等。 > - `ToNHWCFormat`: 将当前状态下的模型转换为NHWC的格式,例如,四维的图输入、卷积算子等。 @@ -334,7 +334,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name) ## KernelDesc -\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel.h)> +\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel.h)> **struct**类型结构体,定义扩展kernel的基本属性。 @@ -349,7 +349,7 @@ struct KernelDesc { ## RegisterKernel -\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel.h)> +\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel.h)> ### CreateKernel @@ -363,13 +363,13 @@ using CreateKernel = std::function( - 参数 - - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)。 + - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)。 - - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)。 + - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)。 - `primitive`: 算子经由flatbuffers反序化为Primitive后的结果。 - - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#context)。 + - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#context)。 ### 公有成员函数 @@ -387,9 +387,9 @@ static Status RegKernel(const std::string &arch, const std::string &provider, Da - `provider`: 生产商名,由用户自定义。 - - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html)。 + - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html)。 - - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 + - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 - `creator`: 创建算子的函数指针,具体见[CreateKernel](#createkernel)的说明。 @@ -407,7 +407,7 @@ Custom算子注册。 - `provider`: 生产商名,由用户自定义。 - - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html)。 + - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html)。 - `type`: 算子类型,由用户自定义,确保唯一即可。 @@ -429,7 +429,7 @@ static CreateKernel GetCreator(const schema::Primitive *primitive, KernelDesc *d ## KernelReg -\#include 
@@ -334,7 +334,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name)

 ## KernelDesc

-\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel.h)>
+\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel.h)>

 **struct**类型结构体,定义扩展kernel的基本属性。

@@ -349,7 +349,7 @@ struct KernelDesc {

 ## RegisterKernel

-\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel.h)>
+\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel.h)>

 ### CreateKernel

@@ -363,13 +363,13 @@ using CreateKernel = std::function<std::shared_ptr<kernel::Kernel>(

 - 参数

-  - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)。
+  - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)。

-  - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#mstensor)。
+  - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#mstensor)。

   - `primitive`: 算子经由flatbuffers反序化为Primitive后的结果。

-  - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#context)。
+  - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#context)。

 ### 公有成员函数

@@ -387,9 +387,9 @@ static Status RegKernel(const std::string &arch, const std::string &provider, Da

   - `provider`: 生产商名,由用户自定义。

-  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html)。
+  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html)。

-  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。
+  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/schema/ops.fbs)中,编译时会生成到ops_generated.h,该文件可以在发布件中获取。

   - `creator`: 创建算子的函数指针,具体见[CreateKernel](#createkernel)的说明。

@@ -407,7 +407,7 @@ Custom算子注册。

   - `provider`: 生产商名,由用户自定义。

-  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html)。
+  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html)。

   - `type`: 算子类型,由用户自定义,确保唯一即可。

@@ -429,7 +429,7 @@ static CreateKernel GetCreator(const schema::Primitive *primitive, KernelDesc *d

 ## KernelReg

-\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel.h)>
+\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel.h)>

 ### ~KernelReg

@@ -453,9 +453,9 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty

   - `provider`: 生产商名,由用户自定义。

-  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html)。
+  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html)。

-  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。
+  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/schema/ops.fbs)中,编译时会生成到ops_generated.h,该文件可以在发布件中获取。

   - `creator`: 创建算子的函数指针,具体见[CreateKernel](#createkernel)的说明。

@@ -471,7 +471,7 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty

   - `provider`: 生产商名,由用户自定义。

-  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html)。
+  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html)。

   - `op_type`: 算子类型,由用户自定义,确保唯一即可。

@@ -491,9 +491,9 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty

   - `provider`: 生产商名,由用户自定义。

-  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html)。
+  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html)。

-  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。
+  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/schema/ops.fbs)中,编译时会生成到ops_generated.h,该文件可以在发布件中获取。

   - `creator`: 创建算子的函数指针,具体见[CreateKernel](#createkernel)的说明。

@@ -511,7 +511,7 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty

   - `provider`: 生产商名,由用户自定义。

-  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_datatype.html)。
+  - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_datatype.html)。

   - `op_type`: 算子类型,由用户自定义,确保唯一即可。

@@ -519,7 +519,7 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty

 ## KernelInterfaceCreator

-\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel_interface.h)>
+\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel_interface.h)>

 定义创建算子的函数指针类型。

@@ -529,7 +529,7 @@ using KernelInterfaceCreator = std::function<std::shared_ptr<kernel::KernelInterface>()>

 ## RegisterKernelInterface

-\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel_interface.h)>
+\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel_interface.h)>

 算子扩展能力注册实现类。

@@ -563,7 +563,7 @@ static Status Reg(const std::string &provider, int op_type, const KernelInterfac

   - `provider`: 生产商,由用户自定义。

-  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。
+  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/schema/ops.fbs)中,编译时会生成到ops_generated.h,该文件可以在发布件中获取。

   - `creator`: KernelInterface的创建函数,详细见[KernelInterfaceCreator](#kernelinterfacecreator)的说明。

@@ -585,7 +585,7 @@ static std::shared_ptr<kernel::KernelInterface> GetKernelInterface(const std::st

 ## KernelInterfaceReg

-\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel_interface.h)>
+\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel_interface.h)>

 算子扩展能力注册构造类。

@@ -601,7 +601,7 @@ KernelInterfaceReg(const std::string &provider, int op_type, const KernelInterfa

   - `provider`: 生产商,由用户自定义。

-  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。
+  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/schema/ops.fbs)中,编译时会生成到ops_generated.h,该文件可以在发布件中获取。

   - `creator`: KernelInterface的创建函数,详细见[KernelInterfaceCreator](#kernelinterfacecreator)的说明。

@@ -621,7 +621,7 @@ KernelInterfaceReg(const std::string &provider, const std::string &op_type, cons

 ## REGISTER_KERNEL_INTERFACE

-\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel_interface.h)>
+\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel_interface.h)>

 注册KernelInterface的实现。

@@ -633,13 +633,13 @@ KernelInterfaceReg(const std::string &provider, const std::string &op_type, cons

   - `provider`: 生产商,由用户自定义。

-  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。
+  - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/schema/ops.fbs)中,编译时会生成到ops_generated.h,该文件可以在发布件中获取。

   - `creator`: 创建KernelInterface的函数指针,具体见[KernelInterfaceCreator](#kernelinterfacecreator)的说明。

 ## REGISTER_CUSTOM_KERNEL_INTERFACE

-\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/register_kernel_interface.h)>
+\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/register_kernel_interface.h)>

 注册Custom算子对应的KernelInterface实现。
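The kernel registry and the kernel-interface registry above are normally used as a pair: one entry creates the runtime kernel, the other supplies its shape-inference hook. A minimal sketch for a float32 Add kernel follows; `MyVendor`, `MyAddKernel`, and the simplified return codes are illustrative assumptions, and `PrimitiveType_AddFusion` is assumed to come from the generated `ops_generated.h` described above:

```c++
#include <memory>
#include <vector>
#include "include/registry/register_kernel.h"
#include "include/registry/register_kernel_interface.h"

using mindspore::MSTensor;
using mindspore::kernel::Kernel;
using mindspore::kernel::KernelInterface;

const auto kFloat32 = mindspore::DataType::kNumberTypeFloat32;

// Hypothetical CPU kernel implementation for AddFusion.
class MyAddKernel : public Kernel {
 public:
  MyAddKernel(const std::vector<MSTensor> &inputs, const std::vector<MSTensor> &outputs,
              const mindspore::schema::Primitive *primitive, const mindspore::Context *ctx)
      : Kernel(inputs, outputs, primitive, ctx) {}
  int Prepare() override { return 0; }
  int Execute() override { /* compute the output tensors from the inputs here */ return 0; }
  int ReSize() override { return 0; }
};

// CreateKernel-compatible factory function.
std::shared_ptr<Kernel> MyAddCreator(const std::vector<MSTensor> &inputs, const std::vector<MSTensor> &outputs,
                                     const mindspore::schema::Primitive *primitive, const mindspore::Context *ctx) {
  return std::make_shared<MyAddKernel>(inputs, outputs, primitive, ctx);
}

// KernelInterface supplying shape inference for the same op.
class MyAddInterface : public KernelInterface {
 public:
  mindspore::Status Infer(std::vector<MSTensor> *inputs, std::vector<MSTensor> *outputs,
                          const mindspore::schema::Primitive *primitive) override {
    (*outputs)[0].SetShape((*inputs)[0].Shape());  // elementwise: output shape == input shape
    return mindspore::kSuccess;
  }
};

std::shared_ptr<KernelInterface> MyAddInterfaceCreator() { return std::make_shared<MyAddInterface>(); }

REGISTER_KERNEL(CPU, MyVendor, kFloat32, PrimitiveType_AddFusion, MyAddCreator)
REGISTER_KERNEL_INTERFACE(MyVendor, PrimitiveType_AddFusion, MyAddInterfaceCreator)
```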
diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md
index 7e61d3ee5ed58ce9ddfa8bfbd8e23cf6594cda42..041feb9f5b02343be4f5d1c6c630f8091123a499 100644
--- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md
+++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md
@@ -1,6 +1,6 @@
 # mindspore::registry::opencl

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md)

 ## 接口汇总

@@ -10,7 +10,7 @@

 ## OpenCLRuntimeWrapper

-\#include <[include/registry/opencl_runtime_wrapper.h](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/include/registry/opencl_runtime_wrapper.h)>
+\#include <[include/registry/opencl_runtime_wrapper.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/include/registry/opencl_runtime_wrapper.h)>

 OpenCLRuntimeWrapper类包装了内部OpenCL的相关接口,用于支持南向GPU算子的开发。

@@ -134,7 +134,7 @@ Status SyncCommandQueue()

 std::shared_ptr<Allocator> GetAllocator()
 ```

-获取GPU内存分配器的智能指针。通过[Allocator接口](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html),可申请GPU内存,用于OpenCL内核的运算。
+获取GPU内存分配器的智能指针。通过[Allocator接口](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html),可申请GPU内存,用于OpenCL内核的运算。

 #### MapBuffer
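`GetAllocator` is the hook that lets a southbound GPU kernel obtain device memory from the same allocator the runtime uses. A minimal sketch, assuming the wrapper is default-constructible as documented and that 1 KiB is a stand-in buffer size:

```c++
#include <memory>
#include "include/registry/opencl_runtime_wrapper.h"

void RunWithGpuBuffer() {
  mindspore::registry::opencl::OpenCLRuntimeWrapper ocl_runtime;
  std::shared_ptr<mindspore::Allocator> allocator = ocl_runtime.GetAllocator();
  void *gpu_buf = allocator->Malloc(1024);  // request 1 KiB of GPU memory
  // ... set gpu_buf as an argument of a kernel built via LoadSource/BuildKernel,
  // then enqueue the kernel ...
  ocl_runtime.SyncCommandQueue();           // block until queued work completes
  allocator->Free(gpu_buf);
}
```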
diff --git a/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md b/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md
index 8edadda95256ee45504abb8711fafc221f5cd37a..30be71ab32e151639e121d511ed06b608fc98eef 100644
--- a/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md
+++ b/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md
@@ -1,6 +1,6 @@
 # AscendDeviceInfo

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md)

 ```java
 import com.mindspore.config.AscendDeviceInfo;
diff --git a/docs/lite/api/source_zh_cn/api_java/class_list.md b/docs/lite/api/source_zh_cn/api_java/class_list.md
index 6137934998551407823cb15d7fff2f07254e6fd3..48e5dac03332d217bf4793446780e9397725b439 100644
--- a/docs/lite/api/source_zh_cn/api_java/class_list.md
+++ b/docs/lite/api/source_zh_cn/api_java/class_list.md
@@ -1,20 +1,20 @@
 # 类列表

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/class_list.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/class_list.md)

 | 包 | 类 | 描述 | 云侧推理是否支持 | 端侧推理是否支持 |
 | ------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |--------|--------|
-| com.mindspore | [Model](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/model.html) | Model定义了MindSpore中的模型,用于计算图的编译和执行。 | √ | √ |
-| com.mindspore.config | [MSContext](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/mscontext.html) | MSContext用于保存执行期间的上下文。 | √ | √ |
-| com.mindspore | [MSTensor](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/mstensor.html) | MSTensor定义了MindSpore中的张量。 | √ | √ |
-| com.mindspore | [ModelParallelRunner](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/model_parallel_runner.html) | 定义了MindSpore Lite并发推理。 | √ | ✕ |
-| com.mindspore.config | [RunnerConfig](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/runner_config.html) | RunnerConfig 定义并发推理的配置参数。 | √ | ✕ |
-| com.mindspore | [Graph](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/graph.html) | Model定义了MindSpore中的计算图。 | ✕ | √ |
-| com.mindspore.config | [CpuBindMode](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/mscontext.html#cpubindmode) | CpuBindMode定义了CPU绑定模式。 | √ | √ |
-| com.mindspore.config | [DeviceType](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/mscontext.html#devicetype) | DeviceType定义了后端设备类型。 | √ | √ |
-| com.mindspore.config | [DataType](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/mstensor.html#datatype) | DataType定义了所支持的数据类型。 | √ | √ |
-| com.mindspore.config | [Version](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/version.html) | Version用于获取MindSpore的版本信息。 | √ | √ |
-| com.mindspore.config | [ModelType](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/model.html#modeltype) | ModelType 定义了模型文件的类型。 | √ | √ |
-| com.mindspore.config | [AscendDeviceInfo](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/ascend_device_info.html) | MindSpore Lite用于昇腾硬件推理的配置参数。 | √ | ✕ |
-| com.mindspore.config | [TrainCfg](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/train_cfg.html) | 用于端上模型训练的配置参数。 | ✕ | √ |
+| com.mindspore | [Model](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/model.html) | Model定义了MindSpore中的模型,用于计算图的编译和执行。 | √ | √ |
+| com.mindspore.config | [MSContext](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/mscontext.html) | MSContext用于保存执行期间的上下文。 | √ | √ |
+| com.mindspore | [MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/mstensor.html) | MSTensor定义了MindSpore中的张量。 | √ | √ |
+| com.mindspore | [ModelParallelRunner](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/model_parallel_runner.html) | 定义了MindSpore Lite并发推理。 | √ | ✕ |
+| com.mindspore.config | [RunnerConfig](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/runner_config.html) | RunnerConfig 定义并发推理的配置参数。 | √ | ✕ |
+| com.mindspore | [Graph](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/graph.html) | Graph定义了MindSpore中的计算图。 | ✕ | √ |
+| com.mindspore.config | [CpuBindMode](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/mscontext.html#cpubindmode) | CpuBindMode定义了CPU绑定模式。 | √ | √ |
+| com.mindspore.config | [DeviceType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/mscontext.html#devicetype) | DeviceType定义了后端设备类型。 | √ | √ |
+| com.mindspore.config | [DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/mstensor.html#datatype) | DataType定义了所支持的数据类型。 | √ | √ |
+| com.mindspore.config | [Version](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/version.html) | Version用于获取MindSpore的版本信息。 | √ | √ |
+| com.mindspore.config | [ModelType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/model.html#modeltype) | ModelType 定义了模型文件的类型。 | √ | √ |
+| com.mindspore.config | [AscendDeviceInfo](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/ascend_device_info.html) | MindSpore Lite用于昇腾硬件推理的配置参数。 | √ | ✕ |
+| com.mindspore.config | [TrainCfg](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_java/train_cfg.html) | 用于端上模型训练的配置参数。 | ✕ | √ |
diff --git a/docs/lite/api/source_zh_cn/api_java/graph.md b/docs/lite/api/source_zh_cn/api_java/graph.md
index d8d6f5fbcf5fef2def1733023df3ae4256dbde2e..3629e9198ef485284a90dd2db2f143d4be3e9f8e 100644
--- a/docs/lite/api/source_zh_cn/api_java/graph.md
+++ b/docs/lite/api/source_zh_cn/api_java/graph.md
@@ -1,6 +1,6 @@
 # Graph

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/graph.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/graph.md)

 ```java
 import com.mindspore.Graph;
diff --git a/docs/lite/api/source_zh_cn/api_java/lite_java_example.rst b/docs/lite/api/source_zh_cn/api_java/lite_java_example.rst
index 680868ab2f7f22a79569496cd6c87be38d663859..5d3355e95e7b82c1d63a95ae8dfd0defcaa05f71 100644
--- a/docs/lite/api/source_zh_cn/api_java/lite_java_example.rst
+++ b/docs/lite/api/source_zh_cn/api_java/lite_java_example.rst
@@ -4,6 +4,6 @@
 .. toctree::
    :maxdepth: 1

-   极简Demo↗
-   基于Java接口的Android应用开发↗
-   高阶用法↗
\ No newline at end of file
+   极简Demo↗
+   基于Java接口的Android应用开发↗
+   高阶用法↗
\ No newline at end of file
diff --git a/docs/lite/api/source_zh_cn/api_java/model.md b/docs/lite/api/source_zh_cn/api_java/model.md
index e1db2a369b7b10b8536f6b2a7549cdb7dd1f226e..8818d8b710fb905ec6c24a2e17ec7b532dab0a3c 100644
--- a/docs/lite/api/source_zh_cn/api_java/model.md
+++ b/docs/lite/api/source_zh_cn/api_java/model.md
@@ -1,6 +1,6 @@
 # Model

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/model.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/model.md)

 ```java
 import com.mindspore.Model;
diff --git a/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md b/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md
index 44f7a1d734a4325ab169c60b578fe8f376934524..eb5b9eac7933aa505f316bc4c45eb34a5adfd6f5 100644
--- a/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md
+++ b/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md
@@ -1,6 +1,6 @@
 # ModelParallelRunner

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md)

 ```java
 import com.mindspore.config.RunnerConfig;
diff --git a/docs/lite/api/source_zh_cn/api_java/mscontext.md b/docs/lite/api/source_zh_cn/api_java/mscontext.md
index 8c412ce4141965fe856d4c2d245749d2dab93fa1..9acec5ca8b9fc1c7813d7dbe00aa49636b8b9052 100644
--- a/docs/lite/api/source_zh_cn/api_java/mscontext.md
+++ b/docs/lite/api/source_zh_cn/api_java/mscontext.md
@@ -1,6 +1,6 @@
 # MSContext

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/mscontext.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/mscontext.md)

 ```java
 import com.mindspore.config.MSContext;
@@ -54,7 +54,7 @@ public boolean init(int threadNum, int cpuBindMode)
 - 参数

   - `threadNum`: 线程数。
-  - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.config.CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)中定义。
+  - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.config.CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)中定义。

 - 返回值
@@ -69,7 +69,7 @@ public boolean init(int threadNum, int cpuBindMode, boolean isEnableParallel)
 - 参数

   - `threadNum`: 线程数。
-  - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.config.CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)中定义。
+  - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.config.CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)中定义。
   - `isEnableParallel`: 是否开启异构并行。

 - 返回值
@@ -86,7 +86,7 @@ public boolean addDeviceInfo(int deviceType, boolean isEnableFloat16)
 - 参数

-  - `deviceType`: 设备类型,`deviceType`在[com.mindspore.config.DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)中定义。
+  - `deviceType`: 设备类型,`deviceType`在[com.mindspore.config.DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)中定义。
   - `isEnableFloat16`: 是否开启fp16。

 - 返回值
@@ -101,7 +101,7 @@ public boolean addDeviceInfo(int deviceType, boolean isEnableFloat16, int npuFre
 - 参数

-  - `deviceType`: 设备类型,`deviceType`在[com.mindspore.config.DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)中定义。
+  - `deviceType`: 设备类型,`deviceType`在[com.mindspore.config.DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)中定义。
   - `isEnableFloat16`: 是否开启fp16。
   - `npuFreq`: NPU运行频率,仅当deviceType为npu才需要。
diff --git a/docs/lite/api/source_zh_cn/api_java/mstensor.md b/docs/lite/api/source_zh_cn/api_java/mstensor.md
index c06b19a43b3e11adef0cc96be06e85845a29df1b..974b344c57d6486f11b2acb3579b9c59193c96f8 100644
--- a/docs/lite/api/source_zh_cn/api_java/mstensor.md
+++ b/docs/lite/api/source_zh_cn/api_java/mstensor.md
@@ -1,6 +1,6 @@
 # MSTensor

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/mstensor.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/mstensor.md)

 ```java
 import com.mindspore.MSTensor;
@@ -86,7 +86,7 @@ public int[] getShape()
 public int getDataType()
 ```

-DataType在[com.mindspore.DataType](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/java/src/main/java/com/mindspore/config/DataType.java)中定义。
+DataType在[com.mindspore.DataType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/java/src/main/java/com/mindspore/config/DataType.java)中定义。

 - 返回值
diff --git a/docs/lite/api/source_zh_cn/api_java/runner_config.md b/docs/lite/api/source_zh_cn/api_java/runner_config.md
index ef746234e8e17b9c473c14302df1fdfb84c77abd..87f79646e92a6e197d2e254da802a7543b8a4d62 100644
--- a/docs/lite/api/source_zh_cn/api_java/runner_config.md
+++ b/docs/lite/api/source_zh_cn/api_java/runner_config.md
@@ -1,6 +1,6 @@
 # RunnerConfig

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/runner_config.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/runner_config.md)

 RunnerConfig定义了MindSpore Lite并发推理的配置参数。
diff --git a/docs/lite/api/source_zh_cn/api_java/train_cfg.md b/docs/lite/api/source_zh_cn/api_java/train_cfg.md
index f9ea584679c528080a9ad6f16440103b052e8db1..0ed803ac1c6552a958f3f8ab426af74808c0af40 100644
--- a/docs/lite/api/source_zh_cn/api_java/train_cfg.md
+++ b/docs/lite/api/source_zh_cn/api_java/train_cfg.md
@@ -1,6 +1,6 @@
 # TrainCfg

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/train_cfg.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/train_cfg.md)

 ```java
 import com.mindspore.config.TrainCfg;
diff --git a/docs/lite/api/source_zh_cn/api_java/version.md b/docs/lite/api/source_zh_cn/api_java/version.md
index 150a616b4ed6eed4a55cb04b1a8e77cc71919a8b..e66db3d3add8e929221aaecee4929764e396e54a 100644
--- a/docs/lite/api/source_zh_cn/api_java/version.md
+++ b/docs/lite/api/source_zh_cn/api_java/version.md
@@ -1,6 +1,6 @@
 # Version

-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/api/source_zh_cn/api_java/version.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/api/source_zh_cn/api_java/version.md)

 ```java
 import com.mindspore.config.Version;
diff --git a/docs/lite/api/source_zh_cn/index.rst b/docs/lite/api/source_zh_cn/index.rst
index c7e8c326ff732a4c017f39948e60086a4b9187bd..fdf99b3825e3452f0dc5376ade59c84d0ecee355 100644
--- a/docs/lite/api/source_zh_cn/index.rst
+++ b/docs/lite/api/source_zh_cn/index.rst
@@ -12,21 +12,21 @@ MindSpore Lite API 支持情况汇总
 +---------------------+---------------------+---------------------+---------------------+
 | 类名 | 接口说明 | C++ 接口 | Python 接口 |
 +=====================+=====================+=====================+=====================+
-| Context | 设置运行时的线程数 | void SetThreadNum(int32_t thread_num) | `Context.cpu.thread_num `__ |
+| Context | 设置运行时的线程数 | void SetThreadNum(int32_t thread_num) | `Context.cpu.thread_num `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| Context | 获取当前线程数设置 | int32_t GetThreadNum() const | `Context.cpu.thread_num `__ |
+| Context | 获取当前线程数设置 | int32_t GetThreadNum() const | `Context.cpu.thread_num `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| Context | 设置运行时的算子并行推理数目 | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num `__ |
+| Context | 设置运行时的算子并行推理数目 | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| Context | 获取当前算子并行数设置 | int32_t GetInterOpParallelNum() const | `Context.cpu.inter_op_parallel_num `__ |
+| Context | 获取当前算子并行数设置 | int32_t GetInterOpParallelNum() const | `Context.cpu.inter_op_parallel_num `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| Context | 设置运行时的CPU绑核策略 | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode `__ |
+| Context | 设置运行时的CPU绑核策略 | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| Context | 获取当前CPU绑核策略 | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode `__ |
+| Context | 获取当前CPU绑核策略 | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| Context | 设置运行时的CPU绑核列表 | void SetThreadAffinity(const std::vector<int32_t> &core_list) | `Context.cpu.thread_affinity_core_list `__ |
+| Context | 设置运行时的CPU绑核列表 | void SetThreadAffinity(const std::vector<int32_t> &core_list) | `Context.cpu.thread_affinity_core_list `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| Context | 获取当前CPU绑核列表 | std::vector<int32_t> GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list `__ |
+| Context | 获取当前CPU绑核列表 | std::vector<int32_t> GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list `__ |
 +---------------------+---------------------+---------------------+---------------------+
 | Context | 设置运行时是否支持并行 | void SetEnableParallel(bool is_parallel) | |
 +---------------------+---------------------+---------------------+---------------------+
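The Context rows above map onto a short configuration sequence in C++. A minimal sketch (thread and core numbers are arbitrary example values):

```c++
#include <memory>
#include <vector>
#include "include/api/context.h"

std::shared_ptr<mindspore::Context> MakeContext() {
  auto context = std::make_shared<mindspore::Context>();
  context->SetThreadNum(4);                    // runtime thread count
  context->SetInterOpParallelNum(2);           // operator-level parallelism
  context->SetThreadAffinity(1);               // bind mode: 1 = prefer big cores
  std::vector<int32_t> cores = {0, 1, 2, 3};
  context->SetThreadAffinity(cores);           // or bind an explicit core list
  return context;
}
```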
@@ -44,7 +44,7 @@ MindSpore Lite API 支持情况汇总
 +---------------------+---------------------+---------------------+---------------------+
 | Context | 获取当前配置中,量化模型的运行模式 | bool GetMultiModalHW() const | |
 +---------------------+---------------------+---------------------+---------------------+
-| Context | 修改该context下的DeviceInfoContext数组 | std::vector<std::shared_ptr<DeviceInfoContext>> &MutableDeviceInfo() | 封装在 `Context.target `__ |
+| Context | 修改该context下的DeviceInfoContext数组 | std::vector<std::shared_ptr<DeviceInfoContext>> &MutableDeviceInfo() | 封装在 `Context.target `__ |
 +---------------------+---------------------+---------------------+---------------------+
 | DeviceInfoContext | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | |
 +---------------------+---------------------+---------------------+---------------------+
@@ -62,29 +62,29 @@ MindSpore Lite API 支持情况汇总
 +---------------------+---------------------+---------------------+---------------------+
 | DeviceInfoContext | 获取内存管理器 | std::shared_ptr<Allocator> GetAllocator() const | |
 +---------------------+---------------------+---------------------+---------------------+
-| CPUDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `context.cpu `__ |
+| CPUDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `context.cpu `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| CPUDeviceInfo | 设置是否以FP16精度进行推理 | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode `__ |
+| CPUDeviceInfo | 设置是否以FP16精度进行推理 | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| CPUDeviceInfo | 获取当前是否以FP16精度进行推理 | bool GetEnableFP16() const | `Context.cpu.precision_mode `__ |
+| CPUDeviceInfo | 获取当前是否以FP16精度进行推理 | bool GetEnableFP16() const | `Context.cpu.precision_mode `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| GPUDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `Context.gpu `__ |
+| GPUDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `Context.gpu `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| GPUDeviceInfo | 设置设备ID | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id `__ |
+| GPUDeviceInfo | 设置设备ID | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| GPUDeviceInfo | 获取设备ID | uint32_t GetDeviceID() const | `Context.gpu.device_id `__ |
+| GPUDeviceInfo | 获取设备ID | uint32_t GetDeviceID() const | `Context.gpu.device_id `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| GPUDeviceInfo | 获取当前运行的RANK ID | int GetRankID() const | `Context.gpu.rank_id `__ |
+| GPUDeviceInfo | 获取当前运行的RANK ID | int GetRankID() const | `Context.gpu.rank_id `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| GPUDeviceInfo | 获取当前运行的GROUP SIZE | int GetGroupSize() const | `Context.gpu.group_size `__ |
+| GPUDeviceInfo | 获取当前运行的GROUP SIZE | int GetGroupSize() const | `Context.gpu.group_size `__ |
 +---------------------+---------------------+---------------------+---------------------+
 | GPUDeviceInfo | 设置推理时算子精度 | void SetPrecisionMode(const std::string &precision_mode) | |
 +---------------------+---------------------+---------------------+---------------------+
 | GPUDeviceInfo | 获取推理时算子精度 | std::string GetPrecisionMode() const | |
 +---------------------+---------------------+---------------------+---------------------+
-| GPUDeviceInfo | 设置是否以FP16精度进行推理 | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode `__ |
+| GPUDeviceInfo | 设置是否以FP16精度进行推理 | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| GPUDeviceInfo | 获取是否以FP16精度进行推理 | bool GetEnableFP16() const | `Context.gpu.precision_mode `__ |
+| GPUDeviceInfo | 获取是否以FP16精度进行推理 | bool GetEnableFP16() const | `Context.gpu.precision_mode `__ |
 +---------------------+---------------------+---------------------+---------------------+
 | GPUDeviceInfo | 设置是否绑定OpenGL纹理数据 | void SetEnableGLTexture(bool is_enable_gl_texture) | |
 +---------------------+---------------------+---------------------+---------------------+
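`MutableDeviceInfo` is the bridge between `Context` and the `DeviceInfoContext` subclasses in the surrounding rows; backends are tried in the order they are pushed. A minimal sketch (device 0 and the FP16 choices are example values):

```c++
#include <memory>
#include "include/api/context.h"

void AttachDevices(const std::shared_ptr<mindspore::Context> &context) {
  auto &devices = context->MutableDeviceInfo();

  auto gpu = std::make_shared<mindspore::GPUDeviceInfo>();
  gpu->SetDeviceID(0);        // which GPU to run on
  gpu->SetEnableFP16(true);   // allow FP16 inference on this backend

  auto cpu = std::make_shared<mindspore::CPUDeviceInfo>();
  cpu->SetEnableFP16(false);

  devices.push_back(gpu);     // preferred backend first
  devices.push_back(cpu);     // CPU fallback last
}
```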
@@ -98,11 +98,11 @@ MindSpore Lite API 支持情况汇总
 +---------------------+---------------------+---------------------+---------------------+
 | GPUDeviceInfo | 获取当前OpenGL EGLDisplay | void \*GetGLDisplay() const | |
 +---------------------+---------------------+---------------------+---------------------+
-| AscendDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `Context.ascend `__ |
+| AscendDeviceInfo | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | `Context.ascend `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| AscendDeviceInfo | 设置设备ID | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id `__ |
+| AscendDeviceInfo | 设置设备ID | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| AscendDeviceInfo | 获取设备ID | uint32_t GetDeviceID() const | `Context.ascend.device_id `__ |
+| AscendDeviceInfo | 获取设备ID | uint32_t GetDeviceID() const | `Context.ascend.device_id `__ |
 +---------------------+---------------------+---------------------+---------------------+
 | AscendDeviceInfo | 设置AIPP配置文件路径 | void SetInsertOpConfigPath(const std::string &cfg_path) | |
 +---------------------+---------------------+---------------------+---------------------+
@@ -132,9 +132,9 @@ MindSpore Lite API 支持情况汇总
 +---------------------+---------------------+---------------------+---------------------+
 | AscendDeviceInfo | 获取模型输出type | enum DataType GetOutputType() const | |
 +---------------------+---------------------+---------------------+---------------------+
-| AscendDeviceInfo | 设置模型精度模式 | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode `__ |
+| AscendDeviceInfo | 设置模型精度模式 | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode `__ |
 +---------------------+---------------------+---------------------+---------------------+
-| AscendDeviceInfo | 获取模型精度模式 | std::string GetPrecisionMode() const | `Context.ascend.precision_mode `__ |
+| AscendDeviceInfo | 获取模型精度模式 | std::string GetPrecisionMode() const | `Context.ascend.precision_mode `__ |
 +---------------------+---------------------+---------------------+---------------------+
 | AscendDeviceInfo | 设置算子实现方式 | void SetOpSelectImplMode(const std::string &op_select_impl_mode) | |
 +---------------------+---------------------+---------------------+---------------------+
@@ -160,7 +160,7 @@ MindSpore Lite API 支持情况汇总
 +---------------------+---------------------+---------------------+---------------------+
 | Model | 从内存缓冲区加载模型,并将模型编译至可在Device上运行的状态 | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr) | |
 +---------------------+---------------------+---------------------+---------------------+
-| Model | 从内存缓冲区加载模型,并将模型编译至可在Device上运行的状态 | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr) | `Model.build_from_file `__ |
+| Model | 根据路径读取加载模型,并将模型编译至可在Device上运行的状态 | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr) | `Model.build_from_file `__ |
 +---------------------+---------------------+---------------------+---------------------+
 | Model | 根据路径读取加载模型,并将模型编译至可在Device上运行的状态 | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr<Context> &model_context, const Key &dec_key, const std::string &dec_mode, const std::string &cropto_lib_path) | |
 +---------------------+---------------------+---------------------+---------------------+
@@ -172,11 +172,11 @@ MindSpore Lite API 支持情况汇总
 +---------------------+---------------------+---------------------+---------------------+
 | Model | 构建一个迁移学习模型,其中主干权重是固定的,头部权重是可训练的 | Status BuildTransferLearning(GraphCell backbone, GraphCell head, const std::shared_ptr<Context> &context, const std::shared_ptr<TrainCfg> &train_cfg = nullptr) | |
 +---------------------+---------------------+---------------------+---------------------+
-| Model | 调整已编译模型的输入张量形状 | Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims) | `Model.resize `__ |
+| Model | 调整已编译模型的输入张量形状 | Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims) | `Model.resize `__ |
 +---------------------+---------------------+---------------------+---------------------+
 | Model | 更新模型的权重Tensor的大小和内容 | Status UpdateWeights(const
std::vector &new_weights) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | 推理模型 | Status Predict(const std::vector &inputs, std::vector \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict `__ | +| Model | 推理模型 | Status Predict(const std::vector &inputs, std::vector \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 仅带callback的推理模型 | Status Predict(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -188,11 +188,11 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 检查模型是否配置了数据预处理 | bool HasPreprocess() | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | 根据路径读取配置文件 | Status LoadConfig(const std::string &config_path) | 封装在 `Model.build_from_file `__ 方法的 `config_path` 参数中 | +| Model | 根据路径读取配置文件 | Status LoadConfig(const std::string &config_path) | 封装在 `Model.build_from_file `__ 方法的 `config_path` 参数中 | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 刷新配置 | Status UpdateConfig(const std::string §ion, const std::pair &config) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | 获取模型所有输入张量 | std::vector GetInputs() | `Model.get_inputs `__ | +| Model | 获取模型所有输入张量 | std::vector GetInputs() | `Model.get_inputs `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取模型指定名字的输入张量 | MSTensor GetInputByTensorName(const std::string &tensor_name) | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -220,7 +220,7 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取训练指标参数 | std::vector GetMetrics() | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | 获取模型所有输出张量 | std::vector GetOutputs() | 封装在 `Model.predict `__ 的返回值 | +| Model | 获取模型所有输出张量 | std::vector GetOutputs() | 封装在 `Model.predict `__ 的返回值 | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 获取模型所有输出张量的名字 | std::vector GetOutputTensorNames() | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -240,33 +240,33 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | 检查设备是否支持该模型 | static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | 设置RunnerConfig的worker的个数 | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num `__ | +| RunnerConfig | 设置RunnerConfig的worker的个数 | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | 获取RunnerConfig的worker的个数 | int32_t GetWorkersNum() const | `Context.parallel.workers_num `__ | +| RunnerConfig | 获取RunnerConfig的worker的个数 | int32_t GetWorkersNum() const | `Context.parallel.workers_num `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | 设置RunnerConfig的context参数 | void SetContext(const std::shared_ptr &context) | 封装在 `Context.parallel `__ | +| RunnerConfig | 设置RunnerConfig的context参数 | void SetContext(const std::shared_ptr &context) | 封装在 `Context.parallel `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | 获取RunnerConfig配置的上下文参数 | std::shared_ptr GetContext() const | 封装在 `Context.parallel `__ | +| RunnerConfig | 获取RunnerConfig配置的上下文参数 | std::shared_ptr GetContext() const | 封装在 `Context.parallel `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | 设置RunnerConfig的配置参数 | void SetConfigInfo(const std::string §ion, const std::map &config) | `Context.parallel.config_info `__ | +| RunnerConfig | 设置RunnerConfig的配置参数 | void SetConfigInfo(const std::string §ion, const std::map &config) | `Context.parallel.config_info `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | 获取RunnerConfig配置参数信息 | std::map> GetConfigInfo() const | `Context.parallel.config_info `__ | +| RunnerConfig | 获取RunnerConfig配置参数信息 | std::map> GetConfigInfo() const | `Context.parallel.config_info `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | 设置RunnerConfig中的配置文件路径 | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path `__ | +| RunnerConfig | 设置RunnerConfig中的配置文件路径 | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | 获取RunnerConfig中的配置文件的路径 | std::string GetConfigPath() const | `Context.parallel.config_path `__ | +| RunnerConfig | 获取RunnerConfig中的配置文件的路径 | std::string GetConfigPath() const | `Context.parallel.config_path `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | 根据路径读取加载模型,生成一个或者多个模型,并将所有模型编译至可在Device上运行的状态 | Status Init(const std::string &model_path, const std::shared_ptr &runner_config = nullptr) | `Model.parallel_runner.build_from_file `__ | +| ModelParallelRunner | 根据路径读取加载模型,生成一个或者多个模型,并将所有模型编译至可在Device上运行的状态 | Status Init(const std::string &model_path, const std::shared_ptr &runner_config = nullptr) | `Model.parallel_runner.build_from_file `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelParallelRunner | 根据模文件数据,生成一个或者多个模型,并将所有模型编译至可在Device上运行的状态 | Status Init(const void \*model_data, const size_t data_size, const std::shared_ptr &runner_config = nullptr) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | 获取模型所有输入张量 | std::vector GetInputs() | `Model.parallel_runner.get_inputs `__ | +| ModelParallelRunner | 获取模型所有输入张量 | std::vector GetInputs() | `Model.parallel_runner.get_inputs `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | 获取模型所有输出张量 | std::vector GetOutputs() | 封装在 `Model.parallel_runner.predict `__ 的返回值 | +| ModelParallelRunner | 获取模型所有输出张量 | std::vector GetOutputs() | 封装在 `Model.parallel_runner.predict `__ 的返回值 | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | 并发推理模型 | Status Predict(const std::vector &inputs, std::vector \*outputs,const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict `__ | +| ModelParallelRunner | 并发推理模型 | Status Predict(const std::vector &inputs, std::vector \*outputs,const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 创建一个MSTensor对象,其数据需复制后才能由Model访问 | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len) noexcept | `Tensor `__ | +| MSTensor | 创建一个MSTensor对象,其数据需复制后才能由Model访问 | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len) noexcept | `Tensor `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 创建一个MSTensor对象,其数据可以直接由Model访问 | static inline MSTensor \*CreateRefTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len, bool own_data = true) noexcept | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -280,19 +280,19 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 销毁一个由 `Clone` 、 `StringsToTensor` 、 `CreateRefTensor` 或 `CreateTensor` 所创建的对象 | static void DestroyTensorPtr(MSTensor \*tensor) noexcept | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 获取MSTensor的名字 | std::string Name() const | `Tensor.name `__ | +| MSTensor | 获取MSTensor的名字 | std::string Name() const | `Tensor.name `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 获取MSTensor的数据类型 | enum DataType DataType() const | `Tensor.dtype `__ | +| MSTensor | 获取MSTensor的数据类型 | enum DataType DataType() const | `Tensor.dtype `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 获取MSTensor的Shape | const std::vector &Shape() const | `Tensor.shape `__ | +| MSTensor | 获取MSTensor的Shape | const std::vector &Shape() const | `Tensor.shape `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 获取MSTensor的元素个数 | int64_t ElementNum() const | `Tensor.element_num `__ | +| MSTensor | 获取MSTensor的元素个数 | int64_t ElementNum() const | `Tensor.element_num `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取指向MSTensor中的数据拷贝的智能指针 | std::shared_ptr Data() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 获取MSTensor中的数据的指针 | void \*MutableData() | 封装在 `Tensor.get_data_to_numpy `__ 和 `Tensor.set_data_from_numpy `__ | +| MSTensor | 获取MSTensor中的数据的指针 | void \*MutableData() | 封装在 `Tensor.get_data_to_numpy `__ 和 `Tensor.set_data_from_numpy `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 获取MSTensor中的数据的以字节为单位的内存长度 | size_t DataSize() const | `Tensor.data_size `__ | +| MSTensor | 获取MSTensor中的数据的以字节为单位的内存长度 | size_t DataSize() const | `Tensor.data_size `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor中的数据是否是常量数据 | bool IsConst() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -308,19 +308,19 @@ MindSpore Lite API 支持情况汇总 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor是否与另一个MSTensor不相等 | bool operator!=(const MSTensor &tensor) const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 设置MSTensor的Shape | void SetShape(const std::vector &shape) | `Tensor.shape `__ | +| MSTensor | 设置MSTensor的Shape | void SetShape(const std::vector &shape) | `Tensor.shape `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 设置MSTensor的DataType | void SetDataType(enum DataType data_type) | `Tensor.dtype `__ | +| MSTensor | 设置MSTensor的DataType | void SetDataType(enum DataType data_type) | `Tensor.dtype `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 设置MSTensor的名字 | void SetTensorName(const std::string &name) | `Tensor.name `__ | +| MSTensor | 设置MSTensor的名字 | void SetTensorName(const std::string &name) | `Tensor.name `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor数据所属的内存池 | void SetAllocator(std::shared_ptr allocator) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor数据所属的内存池 | std::shared_ptr allocator() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 设置MSTensor数据的format | void SetFormat(mindspore::Format format) | `Tensor.format `__ | +| MSTensor | 设置MSTensor数据的format | void SetFormat(mindspore::Format format) | `Tensor.format `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 获取MSTensor数据的format | mindspore::Format format() const | `Tensor.format `__ | +| MSTensor | 获取MSTensor数据的format | mindspore::Format format() const | `Tensor.format `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置指向MSTensor数据的指针 | void SetData(void \*data, bool own_data = true) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -332,15 +332,15 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor的量化参数 | void SetQuantParams(std::vector quant_params) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | 构造ModelGroup对象,指示共享工作空间内存或共享权重内存,默认共享工作空间内存 | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup `__ | +| ModelGroup | 构造ModelGroup对象,指示共享工作空间内存或共享权重内存,默认共享工作空间内存 | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | 共享权重内存时,添加需要共享权重内存的模型对象 | Status AddModel(const std::vector &model_list) | `ModelGroup.add_model `__ | +| ModelGroup | 共享权重内存时,添加需要共享权重内存的模型对象 | Status AddModel(const std::vector &model_list) | `ModelGroup.add_model `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | 共享工作空间内存时,添加需要共享工作空间内存的模型路径 | Status AddModel(const std::vector &model_path_list) | `ModelGroup.add_model `__ | +| ModelGroup | 共享工作空间内存时,添加需要共享工作空间内存的模型路径 | Status AddModel(const std::vector &model_path_list) | `ModelGroup.add_model `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | 共享工作空间内存时,添加需要共享工作空间内存的模型缓存 | Status AddModel(const std::vector> &model_buff_list) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | 共享工作空间内存时,计算最大的工作空间内存大小 | Status CalMaxSizeOfWorkspace(ModelType model_type, const 
std::shared_ptr &ms_context) | `ModelGroup.cal_max_size_of_workspace `__ | +| ModelGroup | 共享工作空间内存时,计算最大的工作空间内存大小 | Status CalMaxSizeOfWorkspace(ModelType model_type, const std::shared_ptr &ms_context) | `ModelGroup.cal_max_size_of_workspace `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/docs/lite/docs/source_en/advanced/image_processing.md b/docs/lite/docs/source_en/advanced/image_processing.md index 510024c56ded17a4a5ce05897b3ec20fb96c9579..bb883ea8338bc40614c89523c3415eeac3de90f9 100644 --- a/docs/lite/docs/source_en/advanced/image_processing.md +++ b/docs/lite/docs/source_en/advanced/image_processing.md @@ -1,6 +1,6 @@ # Data Preprocessing -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/image_processing.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/image_processing.md) ## Overview @@ -15,7 +15,7 @@ The main purpose of image preprocessing is to eliminate irrelevant information i ## Initializing the Image -Here, the [InitFromPixel](https://www.mindspore.cn/lite/api/en/master/generate/function_mindspore_dataset_InitFromPixel-1.html) function in the `image_process.h` file is used to initialize the image. +Here, the [InitFromPixel](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/function_mindspore_dataset_InitFromPixel-1.html) function in the `image_process.h` file is used to initialize the image. ```cpp bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m) @@ -38,7 +38,7 @@ The image processing operations here can be used in any combination according to ### Resizing Image -Here we use the [ResizeBilinear](https://www.mindspore.cn/lite/api/en/master/generate/function_mindspore_dataset_ResizeBilinear-1.html) function in `image_process.h` to resize the image through a bilinear algorithm. Currently, the supported data type is unit8, and the supported channels are 3 and 1. +Here we use the [ResizeBilinear](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/function_mindspore_dataset_ResizeBilinear-1.html) function in `image_process.h` to resize the image through a bilinear algorithm. Currently, the supported data type is unit8, and the supported channels are 3 and 1. 
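The table above lists the C++ entry points row by row; as a reading aid, the following minimal sketch strings the core `Model` calls together in the order a typical application invokes them. It is an illustrative sketch, not an official sample: the model file name is a placeholder, and error handling is reduced to status checks.

```cpp
#include <memory>
#include <vector>

#include "include/api/context.h"
#include "include/api/model.h"
#include "include/api/status.h"
#include "include/api/types.h"

int main() {
  // Build a context with a CPU backend; other DeviceInfo types (e.g. Ascend)
  // are appended to MutableDeviceInfo() the same way.
  auto context = std::make_shared<mindspore::Context>();
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());

  // Load and compile the model from a file path
  // (the C++ counterpart of Model.build_from_file in the Python API).
  mindspore::Model model;
  if (model.Build("mobilenetv2.ms", mindspore::kMindIR, context) != mindspore::kSuccess) {
    return -1;
  }

  // Fill each input tensor through MutableData(); the data source is application-specific.
  auto inputs = model.GetInputs();
  for (auto &tensor : inputs) {
    void *data = tensor.MutableData();
    (void)data;  // ... copy tensor.DataSize() bytes of preprocessed input here ...
  }

  // Run inference; `outputs` plays the role of Model.predict's return value in Python.
  std::vector<mindspore::MSTensor> outputs;
  if (model.Predict(inputs, &outputs) != mindspore::kSuccess) {
    return -1;
  }
  return 0;
}
```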
```cpp
bool ResizeBilinear(const LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
@@ -60,7 +60,7 @@ ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
### Converting the Image Data Type
-Here we use the [ConvertTo](https://www.mindspore.cn/lite/api/en/master/generate/function_mindspore_dataset_ConvertTo-1.html) function in `image_process.h` to convert the image data type. Currently, the conversion from uint8 to float is supported.
+Here we use the [ConvertTo](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/function_mindspore_dataset_ConvertTo-1.html) function in `image_process.h` to convert the image data type. Currently, the conversion from uint8 to float is supported.
```cpp
bool ConvertTo(const LiteMat &src, LiteMat &dst, double scale = 1.0)
@@ -82,7 +82,7 @@ ConvertTo(lite_mat_bgr, lite_mat_convert_float);
### Cropping Image Data
-Here we use the [Crop](https://www.mindspore.cn/lite/api/en/master/generate/function_mindspore_dataset_Crop-1.html) function in `image_process.h` to crop the image. Currently, channels 3 and 1 are supported.
+Here we use the [Crop](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/function_mindspore_dataset_Crop-1.html) function in `image_process.h` to crop the image. Currently, channels 3 and 1 are supported.
```cpp
bool Crop(const LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
@@ -104,7 +104,7 @@ Crop(lite_mat_bgr, lite_mat_cut, 16, 16, 224, 224);
### Normalizing Image Data
-In order to eliminate the dimensional influence among the data indicators and solve the comparability problem among the data indicators through standardization processing is adopted, here is the use of the [SubStractMeanNormalize](https://www.mindspore.cn/lite/api/en/master/generate/function_mindspore_dataset_SubStractMeanNormalize-1.html) function in `image_process.h` to normalize the image data.
+Standardization is adopted to eliminate the dimensional influence among the data indicators and to make them comparable. Here, the [SubStractMeanNormalize](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/function_mindspore_dataset_SubStractMeanNormalize-1.html) function in `image_process.h` is used to normalize the image data.
```cpp
bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector<float> &mean, const std::vector<float> &std)
diff --git a/docs/lite/docs/source_en/advanced/micro.md b/docs/lite/docs/source_en/advanced/micro.md
index fae024ddc1171aa3f5562efaacb73f16355ffc75..add484e83c326618fc9228e03b9a695840f57e25 100644
--- a/docs/lite/docs/source_en/advanced/micro.md
+++ b/docs/lite/docs/source_en/advanced/micro.md
@@ -1,6 +1,6 @@
# Performing Inference or Training on MCU or Small Systems
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/micro.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/micro.md)
## Overview
@@ -18,7 +18,7 @@ Deploying a model for inference or training via the Micro involves the following
### Overview
The Micro configuration item in the parameter configuration file is configured via the MindSpore Lite conversion tool `converter_lite`.
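For orientation, the sketch below shows what a minimal parameter configuration file with a Micro section can look like. The section and parameter names follow the `micro_param` definitions described later on this page (see Table 1); the concrete values (x86 target, `mnist` project name, output path) are assumptions for illustration only, not a definitive configuration.

```text
[micro_param]
# Minimal sketch; the values below are illustrative assumptions. See Table 1 for all options.
enable_micro=true
target=x86
project_name=mnist
save_path=workspace/
```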
-This chapter describes the functions related to code generation in the conversion tool. For details about how to use the conversion tool, see [Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html). +This chapter describes the functions related to code generation in the conversion tool. For details about how to use the conversion tool, see [Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/r2.7.0/converter/converter_tool.html). ### Preparing Environment @@ -32,11 +32,11 @@ The following describes how to prepare the environment for using the conversion You can obtain the conversion tool in either of the following ways: - - Download [Release Version](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) from the MindSpore official website. + - Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) from the MindSpore official website. Download the release package whose OS is Linux-x86_64 and hardware platform is CPU. - - Start from the source code for [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/build/build.html). + - Start from the source code for [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html). 3. Decompress the downloaded package. @@ -103,7 +103,7 @@ The following describes how to prepare the environment for using the conversion CONVERT RESULT SUCCESS:0 ``` - For details about the parameters related to converter_lite, see [Converter Parameter Description](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html#parameter-description). + For details about the parameters related to converter_lite, see [Converter Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.0/converter/converter_tool.html#parameter-description). After the conversion tool is successfully executed, the generated code is saved in the specified `outputFile` directory. In this example, the mnist folder is in the current conversion directory. The content is as follows: @@ -228,7 +228,7 @@ Table 1: micro_param Parameter Definition CONVERT RESULT SUCCESS:0 ``` - For details about the parameters related to converter_lite, see [Converter Parameter Description](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html#parameter-description). + For details about the parameters related to converter_lite, see [Converter Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.0/converter/converter_tool.html#parameter-description). After the conversion tool is successfully executed, the generated code is saved in the specified `save_path` + `project_name` directory. In this example, the mnist folder is in the current conversion directory. The content is as follows: @@ -277,7 +277,7 @@ Table 1: micro_param Parameter Definition Usually, when generating code, you can reduce the probability of errors in the deployment process by configuring the model input shape as the input shape for actual inference. When the model contains a `Shape` operator or the original model has a non-fixed input shape value, the input shape value of the model must be configured to support the relevant shape optimization and code generation. -The `--inputShape=` command of the conversion tool can be used to configure the input shape of the generated code. For specific parameter meanings, please refer to [Conversion Tool Instructions](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html). 
+The `--inputShape=` command of the conversion tool can be used to configure the input shape of the generated code. For specific parameter meanings, please refer to [Conversion Tool Instructions](https://www.mindspore.cn/lite/docs/en/r2.7.0/converter/converter_tool.html).
### (Optional) Dynamic Shape Configuration
@@ -325,7 +325,7 @@ support_parallel=true
#### Involved Calling Interfaces
By integrating the code and calling the following interfaces, the user can configure the multi-threaded inference of the model.
-For specific interface parameters, refer to [API Document](https://www.mindspore.cn/lite/api/en/master/index.html).
+For specific interface parameters, refer to [API Document](https://www.mindspore.cn/lite/api/en/r2.7.0/index.html).
Table 2: API Interface for Multi-threaded Configuration
@@ -349,12 +349,12 @@ At present, this function is only enabled when the `target` is configured as x86
In MCU scenarios such as Cortex-M, limited by the memory size and computing power of the device, Int8 quantization operators are usually used for deployment inference to reduce the runtime memory size and speed up operations.
-If the user already has an Int8 full quantitative model, you can refer to the section on [Generating Inference Code by Running converter_lite](https://www.mindspore.cn/lite/docs/en/master/advanced/micro.html#generating-inference-code-by-running-converter-lite) to try to generate Int8 quantitative inference code directly without reading this chapter.
+If the user already has a fully quantized Int8 model, you can refer to the section on [Generating Inference Code by Running converter_lite](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/micro.html#generating-inference-code-by-running-converter-lite) to try to generate Int8 quantized inference code directly without reading this chapter.
In general, the user has only one trained float32 model. To generate Int8 quantized inference code in this case, the post-training quantization function of the conversion tool is needed. See the following for the specific steps.
#### Configuration
-Int8 quantization inference code can be generated by configuring quantization control parameters in the configuration file. For the description of quantization control parameters (universal quantization parameter `common_quant_param` and full quantization parameter `full_quant_param`), please refer to the [Quantization](https://www.mindspore.cn/lite/docs/en/master/advanced/quantization.html).
+Int8 quantized inference code can be generated by configuring quantization control parameters in the configuration file. For the description of the quantization control parameters (universal quantization parameter `common_quant_param` and full quantization parameter `full_quant_param`), please refer to [Quantization](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/quantization.html).
An example of a configuration file for generating Int8 quantized inference code for a `Cortex-M` platform is as follows:
@@ -411,7 +411,7 @@ target_device=DSP
### Overview
The training code can be generated for the input model by using the MindSpore Lite conversion tool `converter_lite` and configuring the Micro configuration item in the parameter configuration file of the conversion tool.
-This chapter describes the functions related to code generation in the conversion tool. For details about how to use the conversion tool, see [Converting Models for Training](https://www.mindspore.cn/lite/docs/en/master/train/converter_train.html).
+This chapter describes the functions related to code generation in the conversion tool. For details about how to use the conversion tool, see [Converting Models for Training](https://www.mindspore.cn/lite/docs/en/r2.7.0/train/converter_train.html).
### Preparing Environment
@@ -491,7 +491,7 @@ For preparing environment section, refer to the [above](#preparing-environment),
After generating model inference code, you need to obtain the `Micro` lib on which the generated inference code depends before performing integrated development on the code.
The inference code of different platforms depends on the `Micro` lib of the corresponding platform. You need to specify the platform via the micro configuration item `target` based on the platform in use when generating code, and obtain the `Micro` lib of the platform when obtaining the inference package.
-You can download the [Release Version](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) of the corresponding platform from the MindSpore official website.
+You can download the [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) of the corresponding platform from the MindSpore official website.
In the chapter [Generating Model Inference Code](#generating-model-inference-code), we obtained the model inference code of the Linux platform with the x86_64 architecture. The `Micro` lib on which the code depends is the release package used by the conversion tool. The release package contains the following content on which the inference code depends:
@@ -523,7 +523,7 @@ Users can refer to the benchmark routine to integrate and develop the `src` infe
### Calling Interface of Inference Code
-The following is the general calling interface of the inference code. For a detailed description of the interface, please refer to the [API documentation](https://www.mindspore.cn/lite/api/en/master/index.html).
+The following is the general calling interface of the inference code. For a detailed description of the interface, please refer to the [API documentation](https://www.mindspore.cn/lite/api/en/r2.7.0/index.html).
Table 3: Inference Common API Interface
@@ -559,9 +559,9 @@ Different platforms have differences in code integration and compilation deploym
- For the MCU of the cortex-M architecture, see [Performing Inference on the MCU](#performing-inference-on-the-mcu)
-- For the Linux platform with the x86_64 architecture, see [Compilation and Deployment on Linux_x86_64 Platform](https://gitee.com/mindspore/mindspore-lite/tree/master/mindspore-lite/examples/quick_start_micro/mnist_x86)
+- For the Linux platform with the x86_64 architecture, see [Compilation and Deployment on Linux_x86_64 Platform](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.0/mindspore-lite/examples/quick_start_micro/mnist_x86)
-- For details about how to compile and deploy arm32 or arm64 on the Android platform, see [Compilation and Deployment on Android Platform](https://gitee.com/mindspore/mindspore-lite/tree/master/mindspore-lite/examples/quick_start_micro/mobilenetv2_arm64)
+- For details about how to compile and deploy arm32 or arm64 on the Android platform, see [Compilation and Deployment on Android Platform](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.0/mindspore-lite/examples/quick_start_micro/mobilenetv2_arm64)
- For compilation and deployment on the OpenHarmony platform, see [Executing Inference on Light Harmony Devices](#executing-inference-on-light-harmony-devices)
@@ -619,11 +619,11 @@ mnist # Specified name of generated code root directory
The STM32F767 uses the Cortex-M7 architecture. You can obtain the `Micro` lib of the architecture in either of the following ways:
-- Download [Release Version](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) from the MindSpore official website.
+- Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) from the MindSpore official website.
You need to download the release package whose OS is None and hardware platform is Cortex-M7.
-- Start from the source code for [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/build/build.html).
+- Start from the source code for [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html).
You can run the `MSLITE_MICRO_PLATFORM=cortex-m7 bash build.sh -I x86_64` command to compile the Cortex-M7 release package.
@@ -1004,7 +1004,7 @@ For details about how to develop light Harmony applications, see [Running Hello
└── src
```
-Download the [precompiled inference runtime package](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) for OpenHarmony and decompress it to any Harmony source code path. Compile BUILD.gn file:
+Download the [precompiled inference runtime package](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) for OpenHarmony and decompress it to any Harmony source code path. Then compile the BUILD.gn file:
```text
import("//build/lite/config/component/lite_component.gni")
@@ -1123,7 +1123,7 @@ name: int8toft32_Softmax-7_post0/output-0, DataType: 43, Elements: 10, Shape: [1
## Custom Kernel
-Please refer to [Custom Kernel](https://www.mindspore.cn/lite/docs/en/master/advanced/third_party/register.html) to understand the basic concepts before using.
+Please refer to [Custom Kernel](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/third_party/register.html) to understand the basic concepts before use.
Micro currently only supports the registration and implementation of custom operators of the custom type, and does not support the registration and custom implementation of built-in operators (such as conv2d and fc). We use the Hi3516D board as an example to show how to use kernel registration in Micro.
@@ -1155,7 +1155,7 @@ The previous step generates the source code directory under the specified path w
int CustomKernel(TensorC *inputs, int input_num, TensorC *outputs, int output_num, CustomParameter *param);
```
-Users need to implement this function and add their source files to the cmake project. For example, we provide the custom kernel example dynamic library libmicro_nnie.so that supports NNIE from Hysis, which is included in the [official download page](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) "NNIE inference runtime lib, benchmark tool" component. Users need to modify the CMakeLists.txt of the generated code, add the name and path of the linked library.
+Users need to implement this function and add their source files to the cmake project. For example, we provide the custom kernel example dynamic library libmicro_nnie.so that supports NNIE from HiSilicon, which is included in the [official download page](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) "NNIE inference runtime lib, benchmark tool" component. Users need to modify the CMakeLists.txt of the generated code and add the name and path of the linked library.
``` shell
@@ -1167,7 +1167,7 @@ target_link_libraries(benchmark net micro_nnie nnie mpi VoiceEngine upvqe dnvqe
```
-In the generated `benchmark/benchmark.c` file, add the [NNIE device related initialization code](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/test/config_level0/micro/svp_sys_init.c) before and after calling the main function.
+In the generated `benchmark/benchmark.c` file, add the [NNIE device-related initialization code](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/test/config_level0/micro/svp_sys_init.c) before and after calling the main function.
Finally, we compile the source code:
``` shell
@@ -1200,7 +1200,7 @@ Except for MCU, micro inference is a inference model that separates model struct
### Exporting Inference Model
-Users can directly refer to [Device-side training](https://www.mindspore.cn/lite/docs/en/master/train/runtime_train_cpp.html).
### Generating Inference Code diff --git a/docs/lite/docs/source_en/advanced/quantization.md b/docs/lite/docs/source_en/advanced/quantization.md index f1d5e5dadadeac10efe21b2e344ebe0f7495fea0..2cb17df981efd71f2e179ded7a94881ae1cda48e 100644 --- a/docs/lite/docs/source_en/advanced/quantization.md +++ b/docs/lite/docs/source_en/advanced/quantization.md @@ -1,6 +1,6 @@ # Quantization -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/quantization.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/quantization.md) ## Overview @@ -114,7 +114,7 @@ For the scenarios where the CV model needs to improve the model running speed an To fully quantize the quantization parameters for calculating the activation values, the user needs to provide a calibration dataset. The calibration dataset should preferably come from real inference scenarios that characterize the actual inputs to the model, in the order of 100 - 500, **and the calibration dataset needs to be processed into `NHWC` format**. -For image data, it currently supports the functions of channel adjustment, normalization, scaling, cropping and other preprocessing. The user can set the appropriate [Data Preprocessing Parameters](https://www.mindspore.cn/lite/docs/en/master/advanced/quantization.html#data-preprocessing-parameters) according to the preprocessing operation required for inference. +For image data, it currently supports the functions of channel adjustment, normalization, scaling, cropping and other preprocessing. The user can set the appropriate [Data Preprocessing Parameters](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/quantization.html#data-preprocessing-parameters) according to the preprocessing operation required for inference. User configuration of full quantization requires at least `[common_quant_param]`, `[data_preprocess_param]`, and `[full_quant_param]`. @@ -223,7 +223,7 @@ target_device=DSP #### Ascend -Ascend quantization needs to configure Ascend-related configuration at [offline conversion](https://www.mindspore.cn/lite/docs/en/master/mindir/converter_tool.html#description-of-parameters) first, i.e. `optimize` needs to be set to `ascend_oriented`, and then configure Ascend related environment variables during conversion. +Ascend quantization needs to configure Ascend-related configuration at [offline conversion](https://www.mindspore.cn/lite/docs/en/r2.7.0/mindir/converter_tool.html#description-of-parameters) first, i.e. `optimize` needs to be set to `ascend_oriented`, and then configure Ascend related environment variables during conversion. **Ascend Fully Quantized Static Shape Parameter Configuration** @@ -245,7 +245,7 @@ Ascend quantization needs to configure Ascend-related configuration at [offline target_device=ASCEND ``` -**Ascend full quantization supports dynamic Shape parameters**. The conversion command needs to set the same inputShape of the calibration dataset, which can be found in [Conversion Tool Parameter Description](https://www.mindspore.cn/lite/docs/en/master/mindir/converter_tool.html#description-of-parameters). +**Ascend full quantization supports dynamic Shape parameters**. 
+**Ascend full quantization supports dynamic Shape parameters**. The conversion command needs to set `inputShape` to the same shape as the calibration dataset; see the [Conversion Tool Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.0/mindir/converter_tool.html#description-of-parameters).
- The general form of the conversion command in the Ascend fully quantized static shape scenario is:
@@ -301,7 +301,7 @@ quant_strategy=ACWL
## Configuration Parameter
-Post training quantization can be enabled by configuring `configFile` through [Conversion Tool](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html). The configuration file adopts the style of [`INI`](https://en.wikipedia.org/wiki/INI_file). For quantization, configurable parameters include:
+Post-training quantization can be enabled by configuring `configFile` through the [Conversion Tool](https://www.mindspore.cn/lite/docs/en/r2.7.0/converter/converter_tool.html). The configuration file adopts the style of [`INI`](https://en.wikipedia.org/wiki/INI_file). For quantization, the configurable parameters include:
- `[common_quant_param]: Public quantization parameters`
- `[weight_quant_param]: Fixed bit quantization parameters`
diff --git a/docs/lite/docs/source_en/advanced/third_party.rst b/docs/lite/docs/source_en/advanced/third_party.rst
index 201cb6521fffecadec7eb744e06614693d40a8b6..905f10b70c4121ccd22120afbf6148f22776477a 100644
--- a/docs/lite/docs/source_en/advanced/third_party.rst
+++ b/docs/lite/docs/source_en/advanced/third_party.rst
@@ -1,8 +1,8 @@
Third-party Access
=================================
-.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg
-   :target: https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party.rst
+.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg
+   :target: https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party.rst
   :alt: View Source On Gitee
.. toctree::
diff --git a/docs/lite/docs/source_en/advanced/third_party/ascend_info.md b/docs/lite/docs/source_en/advanced/third_party/ascend_info.md
index 46d1bb228bf6c0ac59b26be39b75f58fbeca09eb..c8a2ece3f91d4bd89ca8ca5190f81b10d0305ce4 100644
--- a/docs/lite/docs/source_en/advanced/third_party/ascend_info.md
+++ b/docs/lite/docs/source_en/advanced/third_party/ascend_info.md
@@ -1,11 +1,11 @@
# Integrated Ascend
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party/ascend_info.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party/ascend_info.md)
> - Support for the Ascend backend in the device-side version will be deprecated later. For related usage of the Ascend backend, please refer to the cloud-side inference version documentation.
-> - [Build Cloud-side MindSpore Lite](https://mindspore.cn/lite/docs/en/master/mindir/build.html)
-> - [Cloud-side Model Converter](https://mindspore.cn/lite/docs/en/master/mindir/converter.html)
-> - [Cloud-side Benchmark Tool](https://mindspore.cn/lite/docs/en/master/mindir/benchmark.html)
+> - [Build Cloud-side MindSpore Lite](https://mindspore.cn/lite/docs/en/r2.7.0/mindir/build.html)
+> - [Cloud-side Model Converter](https://mindspore.cn/lite/docs/en/r2.7.0/mindir/converter.html)
+> - [Cloud-side Benchmark Tool](https://mindspore.cn/lite/docs/en/r2.7.0/mindir/benchmark.html)
This document describes how to use MindSpore Lite to perform inference and use the dynamic shape function on Linux in the Ascend environment. Currently, MindSpore Lite supports the Atlas 200/300/500 inference product and the Atlas inference series.
@@ -75,7 +75,7 @@ export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH}
MindSpore Lite provides an offline model converter to convert various models (Caffe, ONNX, TensorFlow, and MindIR) into models that can be inferred on the Ascend hardware.
First, use the converter to convert a model into an `ms` model. Then, use the runtime inference framework matching the converter to perform inference. The process is as follows:
-1. [Download](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) the converter dedicated for Ascend. Currently, only Linux is supported.
+1. [Download](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) the converter dedicated to Ascend. Currently, only Linux is supported.
2. Decompress the downloaded package.
@@ -115,7 +115,7 @@ First, use the converter to convert a model into an `ms` model. Then, use the ru
CONVERT RESULT SUCCESS:0
```
- For details about parameters of the converter_lite converter, see ["Parameter Description" in Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html#parameter-description).
+ For details about parameters of the converter_lite converter, see ["Parameter Description" in Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/r2.7.0/converter/converter_tool.html#parameter-description).
Note: If the input shape of the original model is uncertain, specify inputShape when using the converter to convert a model. In addition, set configFile to the value of the input_shape_vector parameter in acl_option_cfg_param. The command is as follows:
@@ -145,12 +145,12 @@ Table 1 [acl_option_cfg_param] parameter configuration
## Runtime
-After obtaining the converted model, use the matching runtime inference framework to perform inference. For details about how to use runtime to perform inference, see [Using C++ Interface to Perform Inference](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html).
+After obtaining the converted model, use the matching runtime inference framework to perform inference. For details about how to use runtime to perform inference, see [Using C++ Interface to Perform Inference](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html).
## Executing the Benchmark
MindSpore Lite provides a benchmark test tool, which can be used to perform quantitative (performance) analysis on the execution time consumed by forward inference of the MindSpore Lite model. In addition, you can perform comparative error (accuracy) analysis based on the output of a specified model.
-For details about the inference tool, see [benchmark](https://www.mindspore.cn/lite/docs/en/master/tools/benchmark_tool.html).
+For details about the inference tool, see [benchmark](https://www.mindspore.cn/lite/docs/en/r2.7.0/tools/benchmark_tool.html).
- Performance analysis
@@ -170,7 +170,7 @@ For details about the inference tool, see [benchmark](https://www.mindspore.cn/l
### Dynamic Shape
-The batch size is not fixed in certain scenarios. For example, in the target detection+facial recognition cascade scenario, the number of detected targets is subject to change, which means that the batch size of the targeted recognition input is dynamic. It would be a great waste of compute resources to perform inferences using the maximum batch size or image size. Thanks to Lite's support for dynamic batch size and dynamic image size on the Atlas 200/300/500 inference product, you can configure the [acl_option_cfg_param] dynamic parameter through configFile to convert a model into an `ms` model, and then use the [resize](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html#resizing-the-input-dimension) function of the model to change the input shape during inference.
+The batch size is not fixed in certain scenarios. For example, in the target detection+facial recognition cascade scenario, the number of detected targets is subject to change, which means that the batch size of the target recognition input is dynamic. It would be a great waste of compute resources to perform inferences using the maximum batch size or image size. Thanks to Lite's support for dynamic batch size and dynamic image size on the Atlas 200/300/500 inference product, you can configure the [acl_option_cfg_param] dynamic parameter through configFile to convert a model into an `ms` model, and then use the [resize](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html#resizing-the-input-dimension) function of the model to change the input shape during inference.
#### Dynamic Batch Size
@@ -204,7 +204,7 @@ The batch size is not fixed in certain scenarios. For example, in the target det
- Inference
- After the dynamic batch size is enabled, during model inference, the input shape is corresponding to the size configured in converter. To change the input shape, use the model [resize](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html#resizing-the-input-dimension) function.
+ After the dynamic batch size is enabled, during model inference, the input shape corresponds to the size configured in the converter. To change the input shape, use the model [resize](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html#resizing-the-input-dimension) function.
- Precautions
@@ -245,7 +245,7 @@ The batch size is not fixed in certain scenarios. For example, in the target det
- Inference
- After the dynamic image size is enabled, during model inference, the input shape is corresponding to the size configured in converter. To change the input shape, use the model [resize](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html#resizing-the-input-dimension) function.
+ After the dynamic image size is enabled, during model inference, the input shape corresponds to the size configured in the converter. To change the input shape, use the model [resize](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html#resizing-the-input-dimension) function.
- Precautions
@@ -255,4 +255,4 @@ The batch size is not fixed in certain scenarios. For example, in the target det
## Supported Operators
-For details about the supported operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/master/reference/operator_list_lite.html).
+For details about the supported operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.0/reference/operator_list_lite.html).
diff --git a/docs/lite/docs/source_en/advanced/third_party/asic.rst b/docs/lite/docs/source_en/advanced/third_party/asic.rst
index d1e5cefc8d9bad85b8b2cf440487b288ea83652a..155733c2d3f02024230da809abea1304352f285a 100644
--- a/docs/lite/docs/source_en/advanced/third_party/asic.rst
+++ b/docs/lite/docs/source_en/advanced/third_party/asic.rst
@@ -1,8 +1,8 @@
Application Specific Integrated Circuit Integration Instructions
================================================================
-.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg
-   :target: https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party/asic.rst
+.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg
+   :target: https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party/asic.rst
   :alt: View Source On Gitee
.. toctree::
diff --git a/docs/lite/docs/source_en/advanced/third_party/converter_register.md b/docs/lite/docs/source_en/advanced/third_party/converter_register.md
index 3fc7584925061c03dcb3c5fa8580a87b6167aa1a..537c6bd884a352a673d79cf3fdcfefb5159afa9d 100644
--- a/docs/lite/docs/source_en/advanced/third_party/converter_register.md
+++ b/docs/lite/docs/source_en/advanced/third_party/converter_register.md
@@ -1,20 +1,20 @@
# Building Custom Operators Offline
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party/converter_register.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party/converter_register.md)
## Overview
-MindSpore Lite [Conversion Tool](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html), in addition to the basic model conversion function, also supports user-defined model optimization and construction to generate models with user-defined operators.
+MindSpore Lite [Conversion Tool](https://www.mindspore.cn/lite/docs/en/r2.7.0/converter/converter_tool.html), in addition to the basic model conversion function, also supports user-defined model optimization and construction to generate models with user-defined operators.
We have designed a registration mechanism that allows users to extend the converter, including node-parse extension, model-parse extension, and graph-optimization extension. Users can combine them as needed to achieve their own goals.
-node-parse extension: The users can define the process to parse a certain node of a model by themselves, which only support ONNX, CAFFE, TF and TFLITE. The related interface is [NodeParser](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_converter_NodeParser.html), [NodeParserRegistry](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_registry_NodeParserRegistry.html).
-model-parse extension: The users can define the process to parse a model by themselves, which only support ONNX, CAFFE, TF and TFLITE. The related interface is [ModelParser](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_converter_ModelParser.html), [ModelParserRegistry](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_registry_ModelParserRegistry.html).
-graph-optimization extension: After parsing a model, a graph structure defined by MindSpore will show up and then, the users can define the process to optimize the parsed graph. The related interfaces are [PassBase](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_registry_PassBase.html), [PassPosition](https://mindspore.cn/lite/api/en/master/generate/enum_mindspore_registry_PassPosition-1.html), [PassRegistry](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_registry_PassRegistry.html).
+node-parse extension: The users can define the process to parse a certain node of a model by themselves, which only supports ONNX, CAFFE, TF, and TFLITE. The related interfaces are [NodeParser](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_NodeParser.html) and [NodeParserRegistry](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_NodeParserRegistry.html).
+model-parse extension: The users can define the process to parse a model by themselves, which only supports ONNX, CAFFE, TF, and TFLITE. The related interfaces are [ModelParser](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_ModelParser.html) and [ModelParserRegistry](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_ModelParserRegistry.html).
+graph-optimization extension: After parsing a model, a graph structure defined by MindSpore is produced; the users can then define the process to optimize the parsed graph. The related interfaces are [PassBase](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_PassBase.html), [PassPosition](https://mindspore.cn/lite/api/en/r2.7.0/generate/enum_mindspore_registry_PassPosition-1.html), [PassRegistry](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_PassRegistry.html).
-> The node-parse extension needs to rely on the flatbuffers, protobuf and the serialization files of third-party frameworks, at the same time, the version of flatbuffers and the protobuf needs to be consistent with that of the released package, the serialized files must be compatible with that used by the released package. Note that the flatbuffers, protobuf and the serialization files are not provided in the released package, users need to compile and generate the serialized files by themselves. The users can obtain the basic information about [flabuffers](https://gitee.com/mindspore/mindspore/blob/master/cmake/external_libs/flatbuffers.cmake), [probobuf](https://gitee.com/mindspore/mindspore/blob/master/cmake/external_libs/protobuf.cmake), [ONNX prototype file](https://gitee.com/mindspore/mindspore/tree/master/third_party/proto/onnx), [CAFFE prototype file](https://gitee.com/mindspore/mindspore/tree/master/third_party/proto/caffe), [TF prototype file](https://gitee.com/mindspore/mindspore/tree/master/third_party/proto/tensorflow) and [TFLITE prototype file](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/tools/converter/parser/tflite/schema.fbs) from the [MindSpore WareHouse](https://gitee.com/mindspore/mindspore/tree/master).
+> The node-parse extension needs to rely on flatbuffers, protobuf, and the serialization files of the third-party frameworks. The versions of flatbuffers and protobuf need to be consistent with those of the released package, and the serialized files must be compatible with those used by the released package. Note that flatbuffers, protobuf, and the serialization files are not provided in the released package; users need to compile and generate the serialized files by themselves. Users can obtain the basic information about [flatbuffers](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake), [protobuf](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/protobuf.cmake), the [ONNX prototype file](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/onnx), the [CAFFE prototype file](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/caffe), the [TF prototype file](https://gitee.com/mindspore/mindspore/tree/v2.7.0/third_party/proto/tensorflow) and the [TFLITE prototype file](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/tools/converter/parser/tflite/schema.fbs) from the [MindSpore WareHouse](https://gitee.com/mindspore/mindspore/tree/v2.7.0).
>
-> MindSpore Lite alse providers a series of registration macros to facilitate user access. These macros include node-parse registration [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/master/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), model-parse registration [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/en/master/generate/define_model_parser_registry.h_REG_MODEL_PARSER-1.html), graph-optimization registration [REG_PASS](https://www.mindspore.cn/lite/api/en/master/generate/define_pass_registry.h_REG_PASS-1.html) and graph-optimization scheduled registration [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/en/master/generate/define_pass_registry.h_REG_SCHEDULED_PASS-1.html)
+> MindSpore Lite also provides a series of registration macros to facilitate user access. These macros include node-parse registration [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), model-parse registration [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_model_parser_registry.h_REG_MODEL_PARSER-1.html), graph-optimization registration [REG_PASS](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_pass_registry.h_REG_PASS-1.html) and graph-optimization scheduled registration [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_pass_registry.h_REG_SCHEDULED_PASS-1.html)
The extension capability of the MindSpore Lite conversion tool is currently supported only on Linux.
@@ -22,15 +22,15 @@ In this chapter, we will show the users a sample of extending MindSpore Lite con
> Since the model-parse extension is a modular extension capability, this chapter does not introduce it in detail. However, we still provide the users with a simplified unit case for inference.
-The chapter takes a [add.tflite](https://download.mindspore.cn/model_zoo/official/lite/quick_start/add.tflite), which only includes an opreator of adding, as an example. We will show the users how to convert the single operator of adding to that of [Custom](https://www.mindspore.cn/lite/docs/en/master/advanced/third_party/register_kernel.html#custom-operators) and finally obtain a model which only includs a single operator of custom.
+The chapter takes [add.tflite](https://download.mindspore.cn/model_zoo/official/lite/quick_start/add.tflite), which only includes an Add operator, as an example. We will show the users how to convert the single Add operator to a [Custom](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/third_party/register_kernel.html#custom-operators) operator and finally obtain a model which only includes a single Custom operator.
-The code related to the example can be obtained from the path [mindspore-lite/examples/converter_extend](https://gitee.com/mindspore/mindspore-lite/tree/master/mindspore-lite/examples/converter_extend).
+The code related to the example can be obtained from the path [mindspore-lite/examples/converter_extend](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.0/mindspore-lite/examples/converter_extend).
## Node Extension
-1. Self-defined node-parse: The users need to inherit the base class [NodeParser](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_converter_NodeParser.html), and then, choose a interface to override according to model frameworks.
+1. Self-defined node-parse: The users need to inherit the base class [NodeParser](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_converter_NodeParser.html), and then choose an interface to override according to the model framework.
-2. Node-parse Registration: The users can directly call the registration interface [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/master/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), so that the self-defined node-parse will be registered in the converter tool of MindSpore Lite.
+2. Node-parse Registration: The users can directly call the registration interface [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), so that the self-defined node-parse will be registered in the converter tool of MindSpore Lite.
```c++
class AddParserTutorial : public NodeParser { // inherit the base class
@@ -45,17 +45,17 @@ class AddParserTutorial : public NodeParser { // inherit the base class
REG_NODE_PARSER(kFmkTypeTflite, ADD, std::make_shared<AddParserTutorial>()); // call the registration interface
```
-For the sample code, please refer to [node_parser](https://gitee.com/mindspore/mindspore-lite/tree/master/mindspore-lite/examples/converter_extend/node_parser).
+For the sample code, please refer to [node_parser](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.0/mindspore-lite/examples/converter_extend/node_parser).
## Model Extension
-For the sample code, please refer to the unit case [ModelParserRegistryTest](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/test/ut/tools/converter/registry/model_parser_registry_test.cc).
+For the sample code, please refer to the unit case [ModelParserRegistryTest](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/test/ut/tools/converter/registry/model_parser_registry_test.cc).
### Optimization Extension
-1. Self-defined Pass: The users need to inherit the base class [PassBase](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_registry_PassBase.html), and override the interface function [Execute](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_dataset_Execute.html).
+1. Self-defined Pass: The users need to inherit the base class [PassBase](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_PassBase.html), and override the interface function [Execute](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_dataset_Execute.html).
-2. Pass Registration: The users can directly call the registration interface [REG_PASS](https://www.mindspore.cn/lite/api/en/master/generate/define_pass_registry.h_REG_PASS-1.html), so that the self-defined pass can be registered in the converter tool of MindSpore Lite.
+2. Pass Registration: The users can directly call the registration interface [REG_PASS](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_pass_registry.h_REG_PASS-1.html), so that the self-defined pass can be registered in the converter tool of MindSpore Lite.
```c++
class PassTutorial : public registry::PassBase { // inherit the base class
@@ -75,9 +75,9 @@ REG_PASS(PassTutorial, opt::PassTutorial) // register PassBase's subclass
REG_SCHEDULED_PASS(POSITION_BEGIN, {"PassTutorial"}) // register scheduling logic
```
-For the sample code, please refer to [pass](https://gitee.com/mindspore/mindspore-lite/tree/master/mindspore-lite/examples/converter_extend/pass).
+For the sample code, please refer to [pass](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.0/mindspore-lite/examples/converter_extend/pass).
-> In the offline phase of conversion, we will infer the basic information of output tensors of each node of the model, including the format, data type and shape. So, in this phase, users need to provide the inferring process of self-defined operator. Here, users can refer to [Operator Infershape Extension](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html#operator-infershape-extension), and the sample code can be found in [infer](https://gitee.com/mindspore/mindspore-lite/tree/master/mindspore-lite/examples/converter_extend/infer).
+> In the offline phase of conversion, we will infer the basic information of the output tensors of each node of the model, including the format, data type, and shape. So, in this phase, users need to provide the infershape process of the self-defined operator. Here, users can refer to [Operator Infershape Extension](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html#operator-infershape-extension), and the sample code can be found in [infer](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.0/mindspore-lite/examples/converter_extend/infer).
## Example
@@ -92,21 +92,21 @@ For the sample code, please refer to [pass](https://gitee.com/mindspor
- Compilation preparation
- The release package of MindSpore Lite doesn't provide serialized files of other frameworks, therefore, users need to compile and obtain by yourselves. Here, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/master/advanced/third_party/converter_register.html#overview).
+ The release package of MindSpore Lite doesn't provide the serialized files of other frameworks; users need to compile and obtain them by themselves. Here, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/third_party/converter_register.html#overview).
- The case is a tflite model, users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore/blob/master/cmake/external_libs/flatbuffers.cmake) and combine the [TFLITE Proto File](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/tools/converter/parser/tflite/schema.fbs) to generate the serialized file.
+ The case is a tflite model; users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore/blob/v2.7.0/cmake/external_libs/flatbuffers.cmake) and combine it with the [TFLITE Proto File](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/tools/converter/parser/tflite/schema.fbs) to generate the serialized file.
After generating, users need to create a directory `schema` under the directory of `mindspore-lite/examples/converter_extend` and then place the serialized file in it.
- Compilation and Build
- Execute the script [build.sh](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/examples/converter_extend/build.sh) in the directory of `mindspore-lite/examples/converter_extend`. And then, the released package of MindSpore Lite will be downloaded and the demo will be compiled automatically.
+ Execute the script [build.sh](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/examples/converter_extend/build.sh) in the directory of `mindspore-lite/examples/converter_extend`. The release package of MindSpore Lite will then be downloaded and the demo will be compiled automatically.
```bash
bash build.sh
```
- > If the automatic download is failed, users can download the specified package manually, of which the hardware platform is CPU and the system is Ubuntu-x64 [mindspore-lite-{version}-linux-x64.tar.gz](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html), After unzipping, please copy the directory of `tools/converter/lib` and `tools/converter/include` to the directory of `mindspore-lite/examples/converter_extend`.
+ > If the automatic download fails, users can manually download the specified package, whose hardware platform is CPU and system is Ubuntu-x64: [mindspore-lite-{version}-linux-x64.tar.gz](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html). After unzipping, please copy the directories `tools/converter/lib` and `tools/converter/include` to the directory `mindspore-lite/examples/converter_extend`.
>
> After manually downloading and storing the specified file, users need to execute the `build.sh` script to complete the compilation and build process.
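As a usage sketch, once `build.sh` has completed, the extended converter can be invoked like the standard `converter_lite` tool. The flags below (`--fmk`, `--modelFile`, `--outputFile`) are the standard converter parameters; the file names are assumptions for this example rather than prescribed values.

```bash
# Convert the example model with the extension registered (file names are assumptions).
./converter_lite --fmk=TFLITE --modelFile=add.tflite --outputFile=add_extend
```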
diff --git a/docs/lite/docs/source_en/advanced/third_party/delegate.md b/docs/lite/docs/source_en/advanced/third_party/delegate.md
index 357d2a8aeb9c39320326ebdb912189ec128adf19..e24fe9b83a77e1075fe168a7ff2bea7d312b0548 100644
--- a/docs/lite/docs/source_en/advanced/third_party/delegate.md
+++ b/docs/lite/docs/source_en/advanced/third_party/delegate.md
@@ -1,6 +1,6 @@
# Using Delegate to Support Third-party AI Framework (Device)
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party/delegate.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party/delegate.md)
## Overview
@@ -10,14 +10,14 @@ Delegate of MindSpore Lite is used to support third-party AI frameworks (such as
Using Delegate to support a third-party AI framework mainly includes the following steps:
-1. Add a custom delegate class: Inherit the [Delegate](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Delegate.html) class to implement XXXDelegate.
-2. Implementing the Init Function: The [Init](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Delegate.html) function needs to check whether the device supports the delegate framework and to apply for resources related to delegate.
-3. Implementing the Build Function: The [Build](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Delegate.html) function will implement the kernel support judgment, the sub-graph construction, and the online graph building.
-4. Implementing the sub-graph Kernel: Inherit the [Kernel](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_kernel_Kernel.html#class-kernel) to implement delegate sub-graph Kernel.
+1. Add a custom delegate class: Inherit the [Delegate](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Delegate.html) class to implement XXXDelegate.
+2. Implement the Init function: The [Init](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Delegate.html) function needs to check whether the device supports the delegate framework and to apply for resources related to the delegate.
+3. Implement the Build function: The [Build](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Delegate.html) function will implement the kernel support judgment, the sub-graph construction, and the online graph building.
+4. Implement the sub-graph kernel: Inherit the [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_kernel_Kernel.html#class-kernel) to implement the delegate sub-graph kernel.
### Adding a Custom Delegate Class
-XXXDelegate should inherit from [Delegate](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Delegate.html). In the constructor of XXXDelegate, configure settings for third-party AI framework to build and execute the model, such as NPU frequency, CPU thread number, etc.
+XXXDelegate should inherit from [Delegate](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Delegate.html). In the constructor of XXXDelegate, configure the settings for the third-party AI framework to build and execute the model, such as the NPU frequency, the CPU thread number, etc.
```cpp
class XXXDelegate : public Delegate {
@@ -34,7 +34,7 @@ class XXXDelegate : public Delegate {
### Implementing the Init
-[Init](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Delegate.html) will be called during the [Build](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html) process of [Model](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html#class-model). The specific location is in the [LiteSession::Init](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/lite_session.cc#L696) function of MindSpore Lite internal process.
+[Init](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Delegate.html) will be called during the [Build](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html) process of [Model](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html#class-model). The specific location is in the [LiteSession::Init](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/lite_session.cc#L696) function of the MindSpore Lite internal process.
```cpp
Status XXXDelegate::Init() {
@@ -45,16 +45,16 @@ Status XXXDelegate::Init() {
### Implementing the Build
-The input parameter of the [Build(DelegateModel<schema::Primitive> *model)](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Delegate.html) interface is [DelegateModel](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_DelegateModel.html#template-class-delegatemodel).
+The input parameter of the [Build(DelegateModel<schema::Primitive> *model)](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Delegate.html) interface is [DelegateModel](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_DelegateModel.html#template-class-delegatemodel).
-> [std::vector<kernel::Kernel *> *kernels_](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_kernel_Kernel.html): A list of kernels that have been selected by MindSpore Lite and topologically sorted.
+> [std::vector<kernel::Kernel *> *kernels_](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_kernel_Kernel.html): A list of kernels that have been selected by MindSpore Lite and topologically sorted.
>
-> [const std::map<kernel::Kernel *, const schema::Primitive *> primitives_](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_DelegateModel.html): A map of kernel and its attribute `schema::Primitive`, which is used to analyze the original attribute information.
+> [const std::map<kernel::Kernel *, const schema::Primitive *> primitives_](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_DelegateModel.html): A map of each kernel and its attribute `schema::Primitive`, which is used to analyze the original attribute information.
-[Build](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Delegate.html) will be called during the [Build](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html) process of [Model](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html#class-model). The specific location is in the [Schedule::Schedule](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/scheduler.cc#L132) function of MindSpore Lite internal process. At this time, the inner kernels have been selected by MindSpore Lite. The following steps should be implemented in Build function:
The following steps should be implemented in Build function: +[Build](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Delegate.html) will be called during the [Build](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html) process of [Model](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html#class-model). The specific location is in the [Schedule::Schedule](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/scheduler.cc#L132) function of the MindSpore Lite internal process. At this time, the inner kernels have been selected by MindSpore Lite. The following steps should be implemented in the Build function: -1. Traverse the kernel list, use [GetPrimitive](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_DelegateModel.html) to get the attribute of kernel. Analyze the attribute to judge whether the delegate framework supports it. -2. For a continuous supported kernel list, construct a delegate sub-graph kernel and [Replace](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_DelegateModel.html) the continuous supported kernels with it. +1. Traverse the kernel list, use [GetPrimitive](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_DelegateModel.html) to get the attribute of each kernel. Analyze the attribute to judge whether the delegate framework supports it. +2. For a contiguous list of supported kernels, construct a delegate sub-graph kernel and [Replace](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_DelegateModel.html) those kernels with it. ```cpp Status XXXDelegate::Build(DelegateModel<schema::Primitive> *model) { @@ -95,10 +95,10 @@ kernel::Kernel *XXXDelegate::CreateXXXGraph(KernelIter from, KernelIter end, Del } ``` -The delegate sub-graph kernel `XXXGraph` should inherit from [Kernel](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_kernel_Kernel.html#class-kernel). The realization of `XXXGraph` should focus on: +The delegate sub-graph kernel `XXXGraph` should inherit from [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_kernel_Kernel.html#class-kernel). The implementation of `XXXGraph` should focus on: 1. Find the correct in_tensors and out_tensors for `XXXGraph` according to the original kernels list. -2. Rewrite the Prepare, Resize, and Execute interfaces. [Prepare](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_kernel.html#prepare) will be called in [Build](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html) of [Model](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html#class-model). [Execute](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_kernel.html#execute) will be called in [Predict](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html) of Model. [ReSize](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_kernel.html#resize) will be called in [Resize](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html) of Model. +2. Override the Prepare, Resize, and Execute interfaces. [Prepare](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_kernel.html#prepare) will be called in [Build](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html) of [Model](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html#class-model).
[Execute](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_kernel.html#execute) will be called in [Predict](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html) of Model. [ReSize](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore_kernel.html#resize) will be called in [Resize](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html) of Model. ```cpp class XXXGraph : public kernel::Kernel { @@ -127,7 +127,7 @@ class XXXGraph : public kernel::Kernel { ## Calling Delegate by Lite Framework -MindSpore Lite schedules user-defined delegate by [Context](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Context.html#class-context). Use [SetDelegate](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#setdelegate) to set a custom delegate for Context. Delegate will be passed by [Build](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html) to MindSpore Lite. If the Delegate in the Context is a null pointer, the process will call the inner inference of MindSpore Lite. +MindSpore Lite schedules a user-defined delegate through the [Context](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Context.html#class-context). Use [SetDelegate](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#setdelegate) to set a custom delegate for the Context. The delegate will be passed by [Build](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html) to MindSpore Lite. If the Delegate in the Context is a null pointer, the process will call the built-in inference of MindSpore Lite. ```cpp auto context = std::make_shared<mindspore::Context>(); @@ -156,7 +156,7 @@ if (build_ret != mindspore::kSuccess) { ## Example of NPUDelegate -Currently, MindSpore Lite uses the [NPUDelegate](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/delegate/npu/npu_delegate.h#L29) for the NPU backend. This tutorial gives a brief description of NPUDelegate, so that users can quickly understand the usage of Delegate APIs. +Currently, MindSpore Lite uses the [NPUDelegate](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/delegate/npu/npu_delegate.h#L29) for the NPU backend. This tutorial gives a brief description of NPUDelegate, so that users can quickly understand the usage of Delegate APIs. ### Adding the NPUDelegate Class @@ -190,7 +190,7 @@ class NPUDelegate : public Delegate { ### Implementing the Init of NPUDelegate -[Init](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L75) function is used to apply resource for NPU and determine whether the hardware supports NPU. +The [Init](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L75) function is used to apply for NPU resources and determine whether the hardware supports NPU. ```cpp Status NPUDelegate::Init() { @@ -217,7 +217,7 @@ Status NPUDelegate::Init() { ### Implementing the Build of NPUDelegate -The [Build](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L163) interface parses the DelegateModel and mainly implements the kernel support judgment, the sub-graph construction, and the online graph building.
+The [Build](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L163) interface parses the DelegateModel and mainly implements the kernel support judgment, the sub-graph construction, and the online graph building. ```cpp Status NPUDelegate::Build(DelegateModel<schema::Primitive> *model) { @@ -257,7 +257,7 @@ Status NPUDelegate::Build(DelegateModel<schema::Primitive> *model) { ### Creating NPUGraph -The following [Sample Code](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L273) is the CreateNPUGraph interface of NPUDelegate, used to generate an NPU sub-graph kernel. +The following [Sample Code](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L273) is the CreateNPUGraph interface of NPUDelegate, used to generate an NPU sub-graph kernel. ```cpp kernel::Kernel *NPUDelegate::CreateNPUGraph(const std::vector<NPUOp *> &ops) { @@ -279,7 +279,7 @@ kernel::Kernel *NPUDelegate::CreateNPUGraph(const std::vector<NPUOp *> &ops) { ### Adding the NPUGraph Class -[NPUGraph](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/delegate/npu/npu_graph.h#L29) inherits from [Kernel](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_kernel_Kernel.html#class-kernel). And we need to rewrite the Prepare, Execute, and ReSize interfaces. +[NPUGraph](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/delegate/npu/npu_graph.h#L29) inherits from [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_kernel_Kernel.html#class-kernel). We need to override the Prepare, Execute, and ReSize interfaces. ```cpp class NPUGraph : public kernel::Kernel { @@ -306,7 +306,7 @@ class NPUGraph : public kernel::Kernel { }; ``` -[NPUGraph::Prepare](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/delegate/npu/npu_graph.cc#L306) mainly implements: +[NPUGraph::Prepare](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/delegate/npu/npu_graph.cc#L306) mainly implements: ```cpp int NPUGraph::Prepare() { @@ -314,7 +314,7 @@ int NPUGraph::Prepare() { } ``` -[NPUGraph::Execute](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/src/litert/delegate/npu/npu_graph.cc#L322) mainly implements: +[NPUGraph::Execute](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/src/litert/delegate/npu/npu_graph.cc#L322) mainly implements: ```cpp int NPUGraph::Execute() { @@ -325,4 +325,4 @@ int NPUGraph::Execute() { } ``` -> [NPU](https://www.mindspore.cn/lite/docs/en/master/advanced/third_party/npu_info.html) is a third-party AI framework that added by MindSpore Lite internal developers. The usage of NPU is slightly different. You can set the [Context](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Context.html#class-context) through [SetDelegate](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#setdelegate), or you can add the description of the NPU device [KirinNPUDeviceInfo](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_KirinNPUDeviceInfo.html#class-kirinnpudeviceinfo) to [MutableDeviceInfo](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Context.html) of the Context. +> [NPU](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/third_party/npu_info.html) is a third-party AI framework that was added by MindSpore Lite internal developers.
The usage of NPU is slightly different. You can set the [Context](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Context.html#class-context) through [SetDelegate](https://www.mindspore.cn/lite/api/zh-CN/r2.7.0/api_cpp/mindspore.html#setdelegate), or you can add the NPU device description [KirinNPUDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_KirinNPUDeviceInfo.html#class-kirinnpudeviceinfo) to [MutableDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Context.html) of the Context. diff --git a/docs/lite/docs/source_en/advanced/third_party/npu_info.md b/docs/lite/docs/source_en/advanced/third_party/npu_info.md index 633dbf99cff450322618f9171b04d7190ecf6713..48829794a8b6c7638e327e14049fcea3f1663745 100644 --- a/docs/lite/docs/source_en/advanced/third_party/npu_info.md +++ b/docs/lite/docs/source_en/advanced/third_party/npu_info.md @@ -1,12 +1,12 @@ # NPU Integration Information -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party/npu_info.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party/npu_info.md) ## Steps ### Environment Preparation -Besides basic [Environment Preparation](https://www.mindspore.cn/lite/docs/en/master/build/build.html), using the NPU requires the integration of the HUAWEI HiAI DDK. +Besides basic [Environment Preparation](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html), using the NPU requires the integration of the HUAWEI HiAI DDK. HUAWEI HiAI DDK, which contains the APIs (covering model building, loading, and calculation processes) and the dynamic libraries that implement them (namely libhiai*.so), is required for the use of the NPU. Download [DDK 100.510.010.010](https://developer.huawei.com/consumer/en/doc/development/hiai-Library/ddk-download-0000001053590180), and set the directory of extracted files as `${HWHIAI_DDK}`. Our build script uses this environment variable to locate the DDK. @@ -20,7 +20,7 @@ export MSLITE_ENABLE_NPU=ON bash build.sh -I arm64 -j8 ``` -For more information about compilation, see [Linux Environment Compilation](https://www.mindspore.cn/lite/docs/en/master/build/build.html#linux-environment-compilation). +For more information about compilation, see [Linux Environment Compilation](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#linux-environment-compilation). ### Integration - Integration instructions When developers need to integrate the use of NPU features, it is important to note: - - [Configure the NPU backend](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html#configuring-the-npu-backend). - For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html). + - [Configure the NPU backend](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html#configuring-the-npu-backend).
+ For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html). - - Compile and execute the binary. If you use dynamic linking, refer to [compile output](https://www.mindspore.cn/lite/docs/en/master/build/build.html) when the compile option is `-I arm64` or `-I arm32`. + - Compile and execute the binary. If you use dynamic linking, refer to [compile output](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) when the compile option is `-I arm64` or `-I arm32`. Configure environment variables so that libhiai.so, libhiai_ir.so, libhiai_ir_build.so, and libhiai_hcl_model_runtime.so are dynamically loaded. For example, ```bash @@ -54,7 +54,7 @@ For more information about compilation, see [Linux Environment Compilation](http ./benchmark --device=NPU --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --inputShapes=1,32,32,1 --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out ``` -For more information about the use of Benchmark, see [Benchmark Use](https://www.mindspore.cn/lite/docs/en/master/tools/benchmark_tool.html). +For more information about the use of Benchmark, see [Benchmark Use](https://www.mindspore.cn/lite/docs/en/r2.7.0/tools/benchmark_tool.html). For environment variable settings, you need to set the directory where the libmindspore-lite.so (under the directory `mindspore-lite-{version}-android-{arch}/runtime/lib`) and NPU libraries (under the directory `mindspore-lite-{version}-android-{arch}/runtime/third_party/hiai_ddk/lib/`) are located, to `${LD_LIBRARY_PATH}`. @@ -64,4 +64,4 @@ For supported NPU chips, see [Chipset Platforms and Supported HUAWEI HiAI Versio ## Supported Operators -For supported NPU operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/master/reference/operator_list_lite.html). \ No newline at end of file +For supported NPU operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.0/reference/operator_list_lite.html). \ No newline at end of file diff --git a/docs/lite/docs/source_en/advanced/third_party/register.rst b/docs/lite/docs/source_en/advanced/third_party/register.rst index 82fab65311e404e996c0b0958e9e623cc2144cd1..b76f84c12708d793d11044091aede84cf9fb40e6 100644 --- a/docs/lite/docs/source_en/advanced/third_party/register.rst +++ b/docs/lite/docs/source_en/advanced/third_party/register.rst @@ -1,8 +1,8 @@ Custom Kernel =============== -.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg - :target: https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party/register.rst +.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg + :target: https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party/register.rst :alt: View Source On Gitee ..
toctree:: diff --git a/docs/lite/docs/source_en/advanced/third_party/register_kernel.md b/docs/lite/docs/source_en/advanced/third_party/register_kernel.md index 3c13299a5846f0c147182897c8b780c43a7a2684..df7f6847a595d69e26b2b8a20ff2cf05f2af3bb3 100644 --- a/docs/lite/docs/source_en/advanced/third_party/register_kernel.md +++ b/docs/lite/docs/source_en/advanced/third_party/register_kernel.md @@ -1,6 +1,6 @@ # Building Custom Operators Online -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party/register_kernel.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party/register_kernel.md) ## Implementing Custom Operators @@ -18,11 +18,11 @@ View the operator prototype definition in mindspore-lite/schema/ops.fbs. Check w ### Common Operators -For details about code related to implementation, registration, and InferShape of an operator, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/test/ut/src/registry/registry_test.cc). +For details about code related to implementation, registration, and InferShape of an operator, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/test/ut/src/registry/registry_test.cc). #### Implementing Common Operators -Inherit [mindspore::kernel::Kernel](https://www.mindspore.cn/lite/api/en/master/api_cpp/mindspore_kernel.html) and overload necessary APIs. The following describes how to customize an Add operator: +Inherit [mindspore::kernel::Kernel](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore_kernel.html) and overload necessary APIs. The following describes how to customize an Add operator: 1. An operator inherits a kernel. 2. PreProcess() pre-allocates memory. @@ -74,7 +74,7 @@ int TestCustomAdd::Execute() { #### Registering Common Operators -Currently, the generated macro [REGISTER_KERNEL](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_registry_RegisterKernel.html) is provided for operator registration. The implementation procedure is as follows: +Currently, the generated macro [REGISTER_KERNEL](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_RegisterKernel.html) is provided for operator registration. The implementation procedure is as follows: 1. The TestCustomAddCreator function is used to create a kernel. 2. Use the macro REGISTER_KERNEL to register the kernel. Assume that the vendor is BuiltInTest. @@ -96,7 +96,7 @@ REGISTER_KERNEL(CPU, BuiltInTest, kFloat32, PrimitiveType_AddFusion, TestCustomA Overload the Infer function after inheriting KernelInterface to implement the InferShape capability. The implementation procedure is as follows: -1. Inherit [KernelInterface](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_kernel_KernelInterface.html). +1. Inherit [KernelInterface](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_kernel_KernelInterface.html). 2. Overload the Infer function to derive the shape, format, and data_type of the output tensor.
The following uses the custom Add operator as an example: @@ -120,7 +120,7 @@ class TestCustomAddInfer : public KernelInterface { #### Registering the Common Operator InferShape -Currently, the generated macro [REGISTER_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_registry_RegisterKernelInterface.html) is provided for registering the operator InferShape. The procedure is as follows: +Currently, the generated macro [REGISTER_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_registry_RegisterKernelInterface.html) is provided for registering the operator InferShape. The procedure is as follows: 1. Use the CustomAddInferCreator function to create a KernelInterface instance. 2. Call the REGISTER_KERNEL_INTERFACE macro to register the common operator InferShape. Assume that the vendor is BuiltInTest. @@ -133,7 +133,7 @@ REGISTER_KERNEL_INTERFACE(BuiltInTest, PrimitiveType_AddFusion, CustomAddInferCr ### Custom Operators -For details about code related to parsing, creating, and operating custom operators, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/test/ut/tools/converter/registry/pass_registry_test.cc). +For details about code related to parsing, creating, and operating custom operators, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/test/ut/tools/converter/registry/pass_registry_test.cc). #### Defining Custom Operators @@ -220,11 +220,11 @@ REG_SCHEDULED_PASS(POSITION_BEGIN, schedule) // Set the external Pass sche } // namespace mindspore::opt ``` -For details about code related to implementation, registration, and InferShape of a custom operator, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/test/ut/src/registry/registry_custom_op_test.cc). +For details about code related to implementation, registration, and InferShape of a custom operator, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/test/ut/src/registry/registry_custom_op_test.cc). #### Implementing Custom Operators -The implementation procedure of a custom operator is the same as that of a common operator, because they are specific subclasses of [Kernel](https://www.mindspore.cn/lite/api/en/master/api_cpp/mindspore_kernel.html). +The implementation procedure of a custom operator is the same as that of a common operator, because they are specific subclasses of [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore_kernel.html). If the custom operator does not run on the CPU platform, the result needs to be copied back to the output tensor after the running is complete. The following describes how to create a custom operator with the Add capability: 1. An operator inherits a kernel. @@ -295,7 +295,7 @@ In the example, the byte stream in the attribute is copied to the buf. #### Registering Custom Operators -Currently, the generated macro [REGISTER_CUSTOM_KERNEL](https://www.mindspore.cn/lite/api/en/master/generate/define_register_kernel.h_REGISTER_CUSTOM_KERNEL-1.html) is provided for operator registration. The procedure is as follows: +Currently, the generated macro [REGISTER_CUSTOM_KERNEL](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_register_kernel.h_REGISTER_CUSTOM_KERNEL-1.html) is provided for operator registration. The procedure is as follows: 1. The TestCustomAddCreator function is used to create a kernel. 2. 
Use the macro REGISTER_CUSTOM_KERNEL to register an operator. Assume that the vendor is BuiltInTest and the operator type is Add. @@ -316,7 +316,7 @@ REGISTER_CUSTOM_KERNEL(CPU, BuiltInTest, kFloat32, Add, TestCustomAddCreator) The overall implementation is the same as that of the common operator InferShape. The procedure is as follows: -1. Inherit [KernelInterface](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_kernel_KernelInterface.html). +1. Inherit [KernelInterface](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_kernel_KernelInterface.html). 2. Overload the Infer function to derive the shape, format, and data_type of the output tensor. ```cpp @@ -336,10 +336,10 @@ class TestCustomOpInfer : public KernelInterface { #### Registering the Custom Operator InferShape -Currently, the generated macro [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/master/generate/define_register_kernel_interface.h_REGISTER_CUSTOM_KERNEL_INTERFACE-1.html) is provided for registering the custom operator InferShape. The procedure is as follows: +Currently, the generated macro [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_register_kernel_interface.h_REGISTER_CUSTOM_KERNEL_INTERFACE-1.html) is provided for registering the custom operator InferShape. The procedure is as follows: 1. Use the CustomAddInferCreator function to create a custom KernelInterface. -2. The macro [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/master/generate/define_register_kernel_interface.h_REGISTER_CUSTOM_KERNEL_INTERFACE-1.html) is provided for registering the InferShape capability. The operator type Add must be the same as that in REGISTER_CUSTOM_KERNEL_INTERFACE. +2. The macro [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/define_register_kernel_interface.h_REGISTER_CUSTOM_KERNEL_INTERFACE-1.html) is provided for registering the InferShape capability. The operator type Add must be the same as that in REGISTER_CUSTOM_KERNEL_INTERFACE. ```cpp std::shared_ptr<KernelInterface> CustomAddInferCreator() { return std::make_shared<TestCustomOpInfer>(); } @@ -349,9 +349,9 @@ REGISTER_CUSTOM_KERNEL_INTERFACE(BuiltInTest, Add, CustomAddInferCreator) ## Custom GPU Operators -A set of GPU-related functional APIs are provided to facilitate the development of the GPU-based custom operator and enable the GPU-based custom operator to share the same resources with the internal GPU-based operators to improve the scheduling efficiency. For details about the APIs, see [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/master/api_cpp/mindspore_registry_opencl.html). +A set of GPU-related functional APIs is provided to facilitate the development of GPU-based custom operators and to enable them to share resources with the internal GPU-based operators, improving the scheduling efficiency. For details about the APIs, see [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore_registry_opencl.html). This document describes how to develop a custom GPU operator by parsing sample code. Before reading this document, you need to understand [Implement Custom Operators](#implementing-custom-operators). -The [code repository](https://gitee.com/mindspore/mindspore-lite/blob/master/mindspore-lite/test/ut/src/registry/registry_gpu_custom_op_test.cc) contains implementation and registration of custom GPU operators.
+The [code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.0/mindspore-lite/test/ut/src/registry/registry_gpu_custom_op_test.cc) contains implementation and registration of custom GPU operators. ### Registering Operators @@ -394,7 +394,7 @@ std::shared_ptr CustomAddCreator(const std::vector &in #### Registering Operators When registering GPU operators, you must declare the device type as GPU and pass the operator instance creation function `CustomAddCreator` implemented in the previous step. -In this example, the Float32 implementation of the Custom_Add operator is registered. The registration code is as follows. For details about other parameters in the registration macro, see the [API](https://www.mindspore.cn/lite/api/en/master/api_cpp/mindspore_registry.html). +In this example, the Float32 implementation of the Custom_Add operator is registered. The registration code is as follows. For details about other parameters in the registration macro, see the [API](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore_registry.html). ```cpp const auto kFloat32 = DataType::kNumberTypeFloat32; @@ -404,7 +404,7 @@ REGISTER_CUSTOM_KERNEL(GPU, BuiltInTest, kFloat32, Custom_Add, CustomAddCreator) ### Implementing Operators -In this example, the operator is implemented as the `CustomAddKernel` class. This class inherits [mindspore::kernel::Kernel](https://www.mindspore.cn/lite/api/en/master/api_cpp/mindspore_kernel.html) and reloads necessary APIs to implement the custom operator computation. +In this example, the operator is implemented as the `CustomAddKernel` class. This class inherits [mindspore::kernel::Kernel](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore_kernel.html) and overloads necessary APIs to implement the custom operator computation. #### Constructor and Destructor Functions @@ -428,7 +428,7 @@ class CustomAddKernel : public kernel::Kernel { - opencl_runtime_ - An instance of the OpenCLRuntimeWrapper class. In an operator, this object can be used to call the OpenCL-related API [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/master/api_cpp/mindspore_registry_opencl.html) provided by MindSpore Lite. + An instance of the OpenCLRuntimeWrapper class. In an operator, this object can be used to call the OpenCL-related API [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore_registry_opencl.html) provided by MindSpore Lite. - fp16_enable_ @@ -440,7 +440,7 @@ class CustomAddKernel : public kernel::Kernel { - Other variables - Other variables are required for OpenCL operations. For details, see [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/master/api_cpp/mindspore_registry_opencl.html). + Other variables are required for OpenCL operations. For details, see [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.0/api_cpp/mindspore_registry_opencl.html).
```c++ class CustomAddKernel : public kernel::Kernel { diff --git a/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md b/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md index d5fed120894f84f05ab3dea7afa978eecda78140..434bdee4a3b21670a8ff92fb009a1b4d5e08dd37 100644 --- a/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md +++ b/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md @@ -1,12 +1,12 @@ # TensorRT Integration Information -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md) ## Steps ### Environment Preparation -Besides basic [Environment Preparation](https://www.mindspore.cn/lite/docs/en/master/build/build.html), CUDA and TensorRT is required as well. Current version supports [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) and [TensorRT 6.0.1.5](https://developer.nvidia.com/nvidia-tensorrt-6x-download), and [CUDA 11.1](https://developer.nvidia.com/cuda-11.1.1-download-archive) and [TensorRT 8.5.1](https://developer.nvidia.com/nvidia-tensorrt-8x-download). +Besides basic [Environment Preparation](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html), CUDA and TensorRT are required as well. The current version supports [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) and [TensorRT 6.0.1.5](https://developer.nvidia.com/nvidia-tensorrt-6x-download), and [CUDA 11.1](https://developer.nvidia.com/cuda-11.1.1-download-archive) and [TensorRT 8.5.1](https://developer.nvidia.com/nvidia-tensorrt-8x-download). Install the appropriate version of CUDA and set the installed directory as environment variable `${CUDA_HOME}`. Our build script uses this environment variable to locate CUDA. @@ -20,17 +20,17 @@ In the Linux environment, use the build.sh script in the root directory of MindS bash build.sh -I x86_64 ``` -For more information about compilation, see [Linux Environment Compilation](https://www.mindspore.cn/lite/docs/en/master/build/build.html#linux-environment-compilation). +For more information about compilation, see [Linux Environment Compilation](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#linux-environment-compilation). ### Integration - Integration instructions When developers need to integrate the use of TensorRT features, it is important to note: - - [Configure the TensorRT backend](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html#configuring-the-gpu-backend) in the code. - For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html). + - [Configure the TensorRT backend](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html#configuring-the-gpu-backend) in the code. + For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/r2.7.0/infer/runtime_cpp.html). - - Compile and execute the binary.
If you use dynamic linking, please refer to [Compilation Output](https://www.mindspore.cn/lite/docs/en/master/build/build.html#directory-structure) with compilation option `-I x86_64`. + - Compile and execute the binary. If you use dynamic linking, please refer to [Compilation Output](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html#directory-structure) with compilation option `-I x86_64`. Please set environment variables to dynamically link related libs. ```bash @@ -41,7 +41,7 @@ For more information about compilation, see [Linux Environment Compilation](http - Using Benchmark testing TensorRT inference - Users can also test TensorRT inference using MindSpore Lite Benchmark tool. The location of the compiled Benchmark is shown in [Compiled Output](https://www.mindspore.cn/lite/docs/en/master/build/build.html). Pass the build package to a device with a TensorRT environment(TensorRT 6.0.1.5) and use the Benchmark tool to test TensorRT inference. Examples are as follows: + Users can also test TensorRT inference using the MindSpore Lite Benchmark tool. The location of the compiled Benchmark is shown in [Compiled Output](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html). Pass the build package to a device with a TensorRT environment (TensorRT 6.0.1.5) and use the Benchmark tool to test TensorRT inference. Examples are as follows: - Test performance @@ -55,14 +55,14 @@ For more information about compilation, see [Linux Environment Compilation](http ./benchmark --device=GPU --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --inputShapes=1,32,32,1 --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out ``` - For more information about the use of Benchmark, see [Benchmark Use](https://www.mindspore.cn/lite/docs/en/master/tools/benchmark.html). + For more information about the use of Benchmark, see [Benchmark Use](https://www.mindspore.cn/lite/docs/en/r2.7.0/tools/benchmark.html). For environment variable settings, you need to set the directory where the `libmindspore-lite.so` (under the directory `mindspore-lite-{version}-{os}-{arch}/runtime/lib`), TensorRT and CUDA `so` libraries are located, to `${LD_LIBRARY_PATH}`. - Using TensorRT engine serialization - TensorRT backend inference supports serializing the built TensorRT model (Engine) into a binary file and saves it locally. When it is used the next time, the model can be deserialized and loaded from the local, avoiding rebuilding and reducing overhead. To support this function, users need to use the [LoadConfig](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html) interface to load the configuration file in the code, you need to specify the saving path of serialization file in the configuration file: + TensorRT backend inference supports serializing the built TensorRT model (Engine) into a binary file and saving it locally. When it is used the next time, the model can be deserialized and loaded locally, avoiding rebuilding and reducing overhead.
To support this function, users need to use the [LoadConfig](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html) interface to load the configuration file in the code, and specify the saving path of the serialization file in the configuration file: ``` [ms_cache] @@ -73,7 +73,7 @@ For more information about compilation, see [Linux Environment Compilation](http By default, TensorRT optimizes the model based on the input shapes (batch size, image size, and so on) at which it was defined. However, the input dimension can be adjusted at runtime by configuring the profile. In the profile, the minimum, dynamic and optimal shape of each input can be set. - TensorRT creates an optimized engine for each profile, choosing CUDA kernels that work for all shapes within the [minimum ~ maximum] range. And in the profile, multiple input dimensions can be configured for a single input. To support this function, users need to use the [LoadConfig](https://www.mindspore.cn/lite/api/en/master/generate/classmindspore_Model.html) interface to load the configuration file in the code. + TensorRT creates an optimized engine for each profile, choosing CUDA kernels that work for all shapes within the [minimum ~ maximum] range. In the profile, multiple input dimensions can be configured for a single input. To support this function, users need to use the [LoadConfig](https://www.mindspore.cn/lite/api/en/r2.7.0/generate/classmindspore_Model.html) interface to load the configuration file in the code. If min, opt, and max are the minimum, optimal, and maximum dimensions, and real_shape is the shape of the input tensor, the following conditions must hold: @@ -102,4 +102,4 @@ For more information about compilation, see [Linux Environment Compilation](http ## Supported Operators -For supported TensorRT operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/master/reference/operator_list_lite.html). +For supported TensorRT operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.0/reference/operator_list_lite.html). diff --git a/docs/lite/docs/source_en/build/build.md b/docs/lite/docs/source_en/build/build.md index 039e285c46cd7b3017932988049b0a60db75bae4..5576f26f0df0f09116d2eff8ba02afb50e49717c 100644 --- a/docs/lite/docs/source_en/build/build.md +++ b/docs/lite/docs/source_en/build/build.md @@ -1,6 +1,6 @@ # Building Device-side -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/build/build.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/build/build.md) This chapter introduces how to quickly compile MindSpore Lite, which includes the following modules: @@ -98,7 +98,7 @@ The construction of modules is controlled by environment variables. Users can co | MSLITE_ENABLE_MODEL_PRE_INFERENCE | Whether to enable pre-inference during model compilation | on, off | off | | MSLITE_ENABLE_GITEE_MIRROR | Whether to enable download third_party from gitee mirror | on, off | off | - > - For TensorRT and NPU compilation environment configuration, refer to [Application Specific Integrated Circuit Integration Instructions](https://www.mindspore.cn/lite/docs/en/master/advanced/third_party/asic.html).
+ > - For TensorRT and NPU compilation environment configuration, refer to [Application Specific Integrated Circuit Integration Instructions](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/third_party/asic.html). > - When the AVX instruction set is enabled, the CPU of the running environment needs to support both AVX and FMA features. > - The compilation time of the model conversion tool is long. If it is not necessary, it is recommended to use `MSLITE_ENABLE_CONVERTER` to turn off the compilation of the conversion tool to speed up the compilation. > - The version supported by the OpenSSL encryption library is 1.1.1k, which needs to be downloaded and compiled by the user. For the compilation, please refer to: . In addition, the path of libcrypto.so.1.1 should be added to LD_LIBRARY_PATH. @@ -107,7 +107,7 @@ The construction of modules is controlled by environment variables. Users can co - Runtime feature compilation options - If the user is sensitive to the package size of the framework, the following options can be configured to reduce the package size by reducing the function of the runtime model reasoning framework. Then, the user can further reduce the package size by operator reduction through the [cropper tool](https://www.mindspore.cn/lite/docs/en/master/tools/cropper_tool.html). + If the user is sensitive to the package size of the framework, the following options can be configured to reduce the package size by reducing the functionality of the runtime model inference framework. Then, the user can further reduce the package size by operator reduction through the [cropper tool](https://www.mindspore.cn/lite/docs/en/r2.7.0/tools/cropper_tool.html). | Option | Parameter Description | Value Range | Defaults | | -------- | ----- | ---- | ---- | @@ -126,7 +126,7 @@ The construction of modules is controlled by environment variables. Users can co First, download source code from the MindSpore code repository. ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone -b v2.7.0 https://gitee.com/mindspore/mindspore.git ``` Then, run the following commands in the root directory of the source code to compile MindSpore Lite of different versions: @@ -323,7 +323,7 @@ The script `build.bat` in the root directory of MindSpore can be used to compile First, use the git tool to download the source code from the MindSpore code repository. ```bat -git clone https://gitee.com/mindspore/mindspore.git +git clone -b v2.7.0 https://gitee.com/mindspore/mindspore.git ``` Then, use the cmd tool to compile MindSpore Lite in the root directory of the source code and execute the following commands. @@ -416,7 +416,7 @@ The script `build.sh` in the root directory of MindSpore can be used to compile First, use the git tool to download the source code from the MindSpore code repository. ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone -b v2.7.0 https://gitee.com/mindspore/mindspore.git ``` Then, use the cmd tool to compile MindSpore Lite in the root directory of the source code and execute the following commands.
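Taken together, the hunks above pin the documented build flow to the `v2.7.0` tag. For quick reference, here is a minimal end-to-end sketch of that flow on a Linux host, assuming an arm64 target with the optional NPU module enabled; this is an illustrative composite of the commands shown in the hunks above, not an official script, and the `build.sh` flags and `MSLITE_ENABLE_*` switches should be adjusted to your target:

```bash
# Sketch assembled from the commands documented above.
git clone -b v2.7.0 https://gitee.com/mindspore/mindspore.git
cd mindspore                   # build.sh is run from the source root
export MSLITE_ENABLE_NPU=ON    # optional module switch from the option table
bash build.sh -I arm64 -j8     # use -I x86_64 for a Linux x86_64 build
```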
diff --git a/docs/lite/docs/source_en/converter/converter_tool.md b/docs/lite/docs/source_en/converter/converter_tool.md index a524beb359a101355ff69fe13dee2dae1fe2c5bd..df39285cdc669582c38b439cf6918d6377a41bf6 100644 --- a/docs/lite/docs/source_en/converter/converter_tool.md +++ b/docs/lite/docs/source_en/converter/converter_tool.md @@ -1,6 +1,6 @@ # Device-side Models Conversion -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/lite/docs/source_en/converter/converter_tool.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/lite/docs/source_en/converter/converter_tool.md) ## Overview @@ -16,7 +16,7 @@ The `ms` model converted by the conversion tool supports the conversion tool and To use the MindSpore Lite model conversion tool, you need to prepare the environment as follows: -- [Compile](https://www.mindspore.cn/lite/docs/en/master/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) model transfer tool. +- [Compile](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) the model conversion tool. - Add the path of the dynamic libraries required by the conversion tool to the environment variable LD_LIBRARY_PATH. @@ -85,9 +85,9 @@ The following describes the parameters in detail. > - The Caffe model is divided into two files: model structure `*.prototxt`, corresponding to the `--modelFile` parameter; model weight `*.caffemodel`, corresponding to the `--weightFile` parameter. > - The priority of the `--fp16` option is very low. For example, if quantization is enabled, `--fp16` will no longer take effect on const tensors that have been quantized. All in all, this option only takes effect on const tensors of float32 when serializing the model. > - `inputDataFormat`: generally, in the scenario of integrating third-party hardware of the NCHW specification, setting it to NCHW will bring a significant performance improvement over NHWC. In other scenarios, users can also set it as needed. -> - The `configFile` configuration files uses the `key=value` mode to define related parameters. For the configuration parameters related to quantization, please refer to [quantization](https://www.mindspore.cn/lite/docs/en/master/advanced/quantization.html). For the configuration parameters related to extension, please refer to [Extension Configuration](https://www.mindspore.cn/lite/docs/en/master/advanced/third_party/converter_register.html#extension-configuration). +> - The `configFile` configuration file uses the `key=value` mode to define related parameters. For the configuration parameters related to quantization, please refer to [quantization](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/quantization.html). For the configuration parameters related to extension, please refer to [Extension Configuration](https://www.mindspore.cn/lite/docs/en/r2.7.0/advanced/third_party/converter_register.html#extension-configuration). > - The `--optimize` parameter is used to set the mode of optimization during the offline conversion.
If this parameter is set to none, no relevant graph optimization operations will be performed during the offline conversion phase of the model, and the relevant graph optimization operations will be done during the execution of the inference phase. The advantage of this parameter is that the converted model can be deployed directly to any CPU/GPU/Ascend hardware backend since it is not optimized in a specific way, while the disadvantage is that the initialization time of the model increases during inference execution. If this parameter is set to general, general optimization will be performed, such as constant folding and operator fusion (the converted model only supports CPU/GPU hardware backend, not Ascend backend). If this parameter is set to gpu_oriented, the general optimization and extra optimization for GPU hardware will be performed (the converted model only supports GPU hardware backend). If this parameter is set to ascend_oriented, the optimization for Ascend hardware will be performed (the converted model only supports Ascend hardware backend). -> - The encryption and decryption function only takes effect when `MSLITE_ENABLE_MODEL_ENCRYPTION=on` is set at [compile](https://www.mindspore.cn/lite/docs/en/master/build/build.html) time and only supports Linux x86 platforms, and the key is a string represented by hexadecimal. Users on the Linux platform can use the `xxd` tool to convert the key represented by the bytes to a hexadecimal representation. +> - The encryption and decryption function only takes effect when `MSLITE_ENABLE_MODEL_ENCRYPTION=on` is set at [compile](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) time and only supports Linux x86 platforms, and the key is a string represented by hexadecimal. Users on the Linux platform can use the `xxd` tool to convert the key represented by the bytes to a hexadecimal representation. It should be noted that the encryption and decryption algorithm has been updated in version 1.7. As a result, the new version of the converter tool does not support the conversion of the encrypted model exported by MindSpore in version 1.6 and earlier. > - Parameters `--input_shape` and dynamicDims are stored in the model during conversion. Call model.get_model_info("input_shape") and model.get_model_info("dynamic_dims") to get them when using the model. @@ -178,7 +178,7 @@ The following describes how to use the conversion command by using several commo To use the MindSpore Lite model conversion tool, the following environment preparations are required. -- [Compile](https://www.mindspore.cn/lite/docs/en/master/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) model transfer tool. +- [Compile](https://www.mindspore.cn/lite/docs/en/r2.7.0/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/r2.7.0/use/downloads.html) the model conversion tool. - Add the path of the dynamic libraries required by the conversion tool to the environment variable PATH. @@ -208,7 +208,7 @@ mindspore-lite-{version}-win-x64 ### Parameter Description -Refer to the Linux environment model conversion tool [parameter description](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html#parameter-description). +Refer to the Linux environment model conversion tool [parameter description](https://www.mindspore.cn/lite/docs/en/r2.7.0/converter/converter_tool.html#parameter-description).
### Example diff --git a/docs/lite/docs/source_en/index.rst b/docs/lite/docs/source_en/index.rst index 627a5481dfd0e1f91c9fff3fc1fd668f1ce9f0aa..e994b853977dfffe7f8d1584feb5d7b6dc257441 100644 --- a/docs/lite/docs/source_en/index.rst +++ b/docs/lite/docs/source_en/index.rst @@ -216,7 +216,7 @@ MindSpore Lite Documentation