From 5fbd9d35a72ed2a75775daad24f747d6e0390dd4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=AE=A6=E6=99=93=E7=8E=B2?= <3174348550@qq.com>
Date: Wed, 10 Dec 2025 10:21:43 +0800
Subject: [PATCH] modify links 2.7.2

---
 .jenkins/check/config/filter_notebooklint.txt | 2 +
 docs/lite/api/_custom/sphinx_builder_html | 2 +-
 .../api/source_en/api_c/lite_c_example.rst | 2 +-
 .../source_en/api_cpp/lite_cpp_example.rst | 6 +-
 .../source_en/api_java/ascend_device_info.md | 2 +-
 docs/lite/api/source_en/api_java/class_list.md | 28 +-
 docs/lite/api/source_en/api_java/graph.md | 2 +-
 .../source_en/api_java/lite_java_example.rst | 6 +-
 docs/lite/api/source_en/api_java/model.md | 2 +-
 .../api_java/model_parallel_runner.md | 2 +-
 docs/lite/api/source_en/api_java/mscontext.md | 10 +-
 docs/lite/api/source_en/api_java/mstensor.md | 4 +-
 .../api/source_en/api_java/runner_config.md | 2 +-
 docs/lite/api/source_en/api_java/train_cfg.md | 2 +-
 docs/lite/api/source_en/api_java/version.md | 2 +-
 docs/lite/api/source_en/index.rst | 116 +-
 docs/lite/api/source_zh_cn/api_c/context_c.md | 4 +-
 .../api/source_zh_cn/api_c/data_type_c.md | 2 +-
 docs/lite/api/source_zh_cn/api_c/format_c.md | 2 +-
 .../api/source_zh_cn/api_c/lite_c_example.rst | 2 +-
 docs/lite/api/source_zh_cn/api_c/model_c.md | 10 +-
 docs/lite/api/source_zh_cn/api_c/status_c.md | 2 +-
 docs/lite/api/source_zh_cn/api_c/tensor_c.md | 8 +-
 docs/lite/api/source_zh_cn/api_c/types_c.md | 2 +-
 .../source_zh_cn/api_cpp/lite_cpp_example.rst | 6 +-
 .../api/source_zh_cn/api_cpp/mindspore.md | 46 +-
 .../api_cpp/mindspore_converter.md | 16 +-
 .../api_cpp/mindspore_datatype.md | 2 +-
 .../source_zh_cn/api_cpp/mindspore_format.md | 2 +-
 .../source_zh_cn/api_cpp/mindspore_kernel.md | 36 +-
 .../api_cpp/mindspore_registry.md | 108 +-
 .../api_cpp/mindspore_registry_opencl.md | 6 +-
 .../api_java/ascend_device_info.md | 2 +-
 .../api/source_zh_cn/api_java/class_list.md | 28 +-
 docs/lite/api/source_zh_cn/api_java/graph.md | 2 +-
 .../api_java/lite_java_example.rst | 6 +-
 docs/lite/api/source_zh_cn/api_java/model.md | 2 +-
 .../api_java/model_parallel_runner.md | 2 +-
 .../api/source_zh_cn/api_java/mscontext.md | 10 +-
 .../api/source_zh_cn/api_java/mstensor.md | 4 +-
 .../source_zh_cn/api_java/runner_config.md | 2 +-
 .../api/source_zh_cn/api_java/train_cfg.md | 2 +-
 .../lite/api/source_zh_cn/api_java/version.md | 2 +-
 docs/lite/api/source_zh_cn/index.rst | 116 +-
 .../source_en/advanced/image_processing.md | 12 +-
 docs/lite/docs/source_en/advanced/micro.md | 44 +-
 .../docs/source_en/advanced/quantization.md | 10 +-
 .../docs/source_en/advanced/third_party.rst | 4 +-
 .../advanced/third_party/ascend_info.md | 24 +-
 .../source_en/advanced/third_party/asic.rst | 4 +-
 .../third_party/converter_register.md | 42 +-
 .../advanced/third_party/delegate.md | 48 +-
 .../advanced/third_party/npu_info.md | 16 +-
 .../advanced/third_party/register.rst | 4 +-
 .../advanced/third_party/register_kernel.md | 38 +-
 .../advanced/third_party/tensorrt_info.md | 22 +-
 docs/lite/docs/source_en/build/build.md | 12 +-
 .../source_en/converter/converter_tool.md | 12 +-
 docs/lite/docs/source_en/index.rst | 104 +-
 .../source_en/infer/device_infer_example.rst | 4 +-
 .../source_en/infer/image_segmentation.md | 16 +-
 docs/lite/docs/source_en/infer/quick_start.md | 10 +-
 .../docs/source_en/infer/quick_start_c.md | 24 +-
 .../docs/source_en/infer/quick_start_cpp.md | 28 +-
 .../docs/source_en/infer/quick_start_java.md | 20 +-
 docs/lite/docs/source_en/infer/runtime_cpp.md | 130 +--
 .../lite/docs/source_en/infer/runtime_java.md | 58 +-
docs/lite/docs/source_en/mindir/benchmark.rst | 4 +- .../docs/source_en/mindir/benchmark_tool.md | 6 +- docs/lite/docs/source_en/mindir/build.md | 4 +- docs/lite/docs/source_en/mindir/converter.rst | 4 +- .../docs/source_en/mindir/converter_python.md | 16 +- .../docs/source_en/mindir/converter_tool.md | 6 +- .../source_en/mindir/converter_tool_ascend.md | 14 +- docs/lite/docs/source_en/mindir/runtime.rst | 4 +- .../lite/docs/source_en/mindir/runtime_cpp.md | 58 +- .../source_en/mindir/runtime_distributed.rst | 4 +- .../mindir/runtime_distributed_cpp.md | 32 +- .../runtime_distributed_multicard_python.md | 22 +- .../mindir/runtime_distributed_python.md | 32 +- .../docs/source_en/mindir/runtime_java.md | 48 +- .../source_en/mindir/runtime_parallel.rst | 4 +- .../source_en/mindir/runtime_parallel_cpp.md | 24 +- .../source_en/mindir/runtime_parallel_java.md | 22 +- .../mindir/runtime_parallel_python.md | 26 +- .../docs/source_en/mindir/runtime_python.md | 30 +- .../quick_start/one_hour_introduction.md | 50 +- .../source_en/reference/architecture_lite.md | 2 +- .../reference/environment_variable_support.md | 2 +- docs/lite/docs/source_en/reference/faq.md | 30 +- .../reference/image_classification_lite.md | 2 +- .../reference/image_segmentation_lite.md | 2 +- docs/lite/docs/source_en/reference/log.md | 2 +- .../docs/source_en/reference/model_lite.rst | 4 +- .../reference/object_detection_lite.md | 2 +- .../reference/operator_list_codegen.md | 2 +- .../source_en/reference/operator_list_lite.md | 2 +- .../reference/operator_list_lite_for_caffe.md | 2 +- .../reference/operator_list_lite_for_onnx.md | 2 +- .../operator_list_lite_for_tensorflow.md | 4 +- .../operator_list_lite_for_tflite.md | 2 +- .../source_en/reference/operator_lite.rst | 4 +- .../reference/scene_detection_lite.md | 2 +- .../reference/style_transfer_lite.md | 2 +- docs/lite/docs/source_en/tools/benchmark.rst | 4 +- .../source_en/tools/benchmark_golden_data.md | 6 +- .../docs/source_en/tools/benchmark_tool.md | 12 +- .../source_en/tools/benchmark_train_tool.md | 8 +- .../lite/docs/source_en/tools/cropper_tool.md | 12 +- .../docs/source_en/tools/obfuscator_tool.md | 4 +- docs/lite/docs/source_en/tools/visual_tool.md | 2 +- .../docs/source_en/train/converter_train.md | 8 +- .../source_en/train/device_train_example.rst | 4 +- .../docs/source_en/train/runtime_train.rst | 4 +- .../docs/source_en/train/runtime_train_cpp.md | 22 +- .../source_en/train/runtime_train_java.md | 30 +- docs/lite/docs/source_en/train/train_lenet.md | 20 +- .../docs/source_en/train/train_lenet_java.md | 10 +- docs/lite/docs/source_en/use/downloads.md | 4 +- .../source_zh_cn/advanced/image_processing.md | 12 +- docs/lite/docs/source_zh_cn/advanced/micro.md | 44 +- .../source_zh_cn/advanced/quantization.md | 10 +- .../source_zh_cn/advanced/third_party.rst | 4 +- .../advanced/third_party/ascend_info.md | 24 +- .../advanced/third_party/asic.rst | 4 +- .../third_party/converter_register.md | 42 +- .../advanced/third_party/delegate.md | 48 +- .../advanced/third_party/npu_info.md | 14 +- .../advanced/third_party/register.rst | 4 +- .../advanced/third_party/register_kernel.md | 38 +- .../advanced/third_party/tensorrt_info.md | 22 +- docs/lite/docs/source_zh_cn/build/build.md | 12 +- .../source_zh_cn/converter/converter_tool.md | 14 +- docs/lite/docs/source_zh_cn/index.rst | 104 +- .../infer/device_infer_example.rst | 4 +- .../source_zh_cn/infer/image_segmentation.md | 16 +- .../docs/source_zh_cn/infer/quick_start.md | 10 +- .../docs/source_zh_cn/infer/quick_start_c.md | 
24 +- .../source_zh_cn/infer/quick_start_cpp.md | 30 +- .../source_zh_cn/infer/quick_start_java.md | 20 +- .../docs/source_zh_cn/infer/runtime_cpp.md | 130 +-- .../docs/source_zh_cn/infer/runtime_java.md | 58 +- .../docs/source_zh_cn/mindir/benchmark.rst | 4 +- .../source_zh_cn/mindir/benchmark_tool.md | 6 +- docs/lite/docs/source_zh_cn/mindir/build.md | 4 +- .../docs/source_zh_cn/mindir/converter.rst | 4 +- .../source_zh_cn/mindir/converter_custom.md | 20 +- .../source_zh_cn/mindir/converter_python.md | 16 +- .../source_zh_cn/mindir/converter_tool.md | 6 +- .../mindir/converter_tool_ascend.md | 14 +- .../lite/docs/source_zh_cn/mindir/runtime.rst | 4 +- .../docs/source_zh_cn/mindir/runtime_cpp.md | 60 +- .../mindir/runtime_distributed.rst | 4 +- .../mindir/runtime_distributed_cpp.md | 30 +- .../runtime_distributed_multicard_python.md | 22 +- .../mindir/runtime_distributed_python.md | 30 +- .../docs/source_zh_cn/mindir/runtime_java.md | 48 +- .../source_zh_cn/mindir/runtime_parallel.rst | 4 +- .../mindir/runtime_parallel_cpp.md | 24 +- .../mindir/runtime_parallel_java.md | 22 +- .../mindir/runtime_parallel_python.md | 26 +- .../source_zh_cn/mindir/runtime_python.md | 30 +- .../quick_start/one_hour_introduction.md | 50 +- .../reference/architecture_lite.md | 2 +- .../reference/environment_variable_support.md | 2 +- docs/lite/docs/source_zh_cn/reference/faq.md | 30 +- .../reference/image_classification_lite.md | 2 +- .../reference/image_segmentation_lite.md | 2 +- docs/lite/docs/source_zh_cn/reference/log.md | 2 +- .../source_zh_cn/reference/model_lite.rst | 4 +- .../reference/object_detection_lite.md | 2 +- .../reference/operator_list_codegen.md | 2 +- .../reference/operator_list_lite.md | 2 +- .../reference/operator_list_lite_for_caffe.md | 2 +- .../reference/operator_list_lite_for_onnx.md | 2 +- .../operator_list_lite_for_tensorflow.md | 4 +- .../operator_list_lite_for_tflite.md | 2 +- .../source_zh_cn/reference/operator_lite.rst | 4 +- .../reference/scene_detection_lite.md | 2 +- .../reference/style_transfer_lite.md | 2 +- .../docs/source_zh_cn/tools/benchmark.rst | 4 +- .../tools/benchmark_golden_data.md | 6 +- .../docs/source_zh_cn/tools/benchmark_tool.md | 12 +- .../tools/benchmark_train_tool.md | 8 +- .../docs/source_zh_cn/tools/cropper_tool.md | 12 +- .../source_zh_cn/tools/obfuscator_tool.md | 4 +- .../docs/source_zh_cn/tools/visual_tool.md | 2 +- .../source_zh_cn/train/converter_train.md | 8 +- .../train/device_train_example.rst | 4 +- .../docs/source_zh_cn/train/runtime_train.rst | 4 +- .../source_zh_cn/train/runtime_train_cpp.md | 32 +- .../source_zh_cn/train/runtime_train_java.md | 30 +- .../docs/source_zh_cn/train/train_lenet.md | 20 +- .../source_zh_cn/train/train_lenet_java.md | 10 +- docs/lite/docs/source_zh_cn/use/downloads.md | 4 +- .../accuracy_comparison.md | 4 +- .../advanced_development/dev_migration.md | 4 +- .../inference_precision_comparison.md | 4 +- .../performance_optimization.md | 18 +- .../precision_optimization.md | 8 +- .../training_template_instruction.md | 6 +- .../advanced_development/weight_transfer.md | 2 +- .../yaml_config_inference.md | 2 +- .../contribution/mindformers_contribution.md | 2 +- .../contribution/modelers_contribution.md | 2 +- .../docs/source_en/env_variables.md | 4 +- .../source_en/example/distilled/distilled.md | 4 +- .../docs/source_en/faq/feature_related.md | 2 +- .../docs/source_en/faq/model_related.md | 2 +- .../docs/source_en/feature/ckpt.md | 2 +- .../docs/source_en/feature/configuration.md | 20 +- 
.../docs/source_en/feature/dataset.md | 10 +- .../source_en/feature/high_availability.md | 2 +- .../feature/load_huggingface_config.md | 2 +- .../docs/source_en/feature/logging.md | 4 +- .../source_en/feature/memory_optimization.md | 6 +- .../docs/source_en/feature/monitor.md | 2 +- .../feature/other_training_features.md | 8 +- .../source_en/feature/parallel_training.md | 14 +- .../source_en/feature/pma_fused_checkpoint.md | 2 +- .../docs/source_en/feature/quantization.md | 2 +- .../docs/source_en/feature/resume_training.md | 2 +- .../docs/source_en/feature/safetensors.md | 12 +- .../skip_data_and_ckpt_health_monitor.md | 4 +- .../docs/source_en/feature/start_tasks.md | 8 +- .../docs/source_en/feature/tokenizer.md | 2 +- .../feature/training_hyperparameters.md | 6 +- .../docs/source_en/guide/deployment.md | 2 +- .../docs/source_en/guide/evaluation.md | 2 +- .../docs/source_en/guide/inference.md | 2 +- .../docs/source_en/guide/pre_training.md | 2 +- .../source_en/guide/supervised_fine_tuning.md | 6 +- .../docs/source_en/installation.md | 2 +- .../docs/source_en/introduction/models.md | 2 +- .../docs/source_en/introduction/overview.md | 2 +- .../accuracy_comparison.md | 4 +- .../advanced_development/dev_migration.md | 2 +- .../inference_precision_comparison.md | 4 +- .../performance_optimization.md | 18 +- .../precision_optimization.md | 8 +- .../training_template_instruction.md | 6 +- .../advanced_development/weight_transfer.md | 2 +- .../yaml_config_inference.md | 2 +- .../contribution/mindformers_contribution.md | 2 +- .../contribution/modelers_contribution.md | 2 +- .../docs/source_zh_cn/env_variables.md | 4 +- .../convert_ckpt_to_megatron.md | 4 +- .../example/distilled/distilled.md | 4 +- .../docs/source_zh_cn/faq/feature_related.md | 2 +- .../docs/source_zh_cn/faq/model_related.md | 2 +- .../docs/source_zh_cn/feature/ckpt.md | 2 +- .../source_zh_cn/feature/configuration.md | 20 +- .../docs/source_zh_cn/feature/dataset.md | 10 +- .../source_zh_cn/feature/high_availability.md | 2 +- .../feature/load_huggingface_config.md | 2 +- .../docs/source_zh_cn/feature/logging.md | 4 +- .../feature/memory_optimization.md | 6 +- .../docs/source_zh_cn/feature/monitor.md | 2 +- .../feature/other_training_features.md | 8 +- .../source_zh_cn/feature/parallel_training.md | 14 +- .../feature/pma_fused_checkpoint.md | 2 +- .../docs/source_zh_cn/feature/quantization.md | 2 +- .../source_zh_cn/feature/resume_training.md | 2 +- .../docs/source_zh_cn/feature/safetensors.md | 12 +- .../skip_data_and_ckpt_health_monitor.md | 4 +- .../docs/source_zh_cn/feature/start_tasks.md | 8 +- .../docs/source_zh_cn/feature/tokenizer.md | 2 +- .../feature/training_hyperparameters.md | 6 +- .../docs/source_zh_cn/guide/deployment.md | 2 +- .../docs/source_zh_cn/guide/evaluation.md | 2 +- .../docs/source_zh_cn/guide/inference.md | 2 +- .../docs/source_zh_cn/guide/llm_training.md | 2 +- .../docs/source_zh_cn/guide/pre_training.md | 2 +- .../guide/supervised_fine_tuning.md | 6 +- .../docs/source_zh_cn/installation.md | 2 +- .../docs/source_zh_cn/introduction/models.md | 2 +- .../source_zh_cn/introduction/overview.md | 2 +- .../api/api_cn/API_sample_and_requirements.md | 4 +- .../source_en/api_python/bfloat16_support.md | 72 +- .../api_python/dynamic_shape_func.md | 482 ++++---- .../source_en/api_python/dynamic_shape_nn.md | 114 +- .../api_python/dynamic_shape_primitive.md | 428 +++---- .../source_en/api_python/env_var_list.rst | 26 +- .../api_python/operator_list_parallel.md | 338 +++--- .../source_en/faq/data_processing.md | 22 
+- .../source_en/faq/distributed_parallel.md | 4 +- .../mindspore/source_en/faq/feature_advice.md | 4 +- .../source_en/faq/implement_problem.md | 16 +- docs/mindspore/source_en/faq/inference.md | 2 +- docs/mindspore/source_en/faq/installation.md | 2 +- .../source_en/faq/network_compilation.md | 14 +- .../source_en/faq/operators_compile.md | 22 +- .../source_en/faq/performance_tuning.md | 2 +- .../source_en/faq/precision_tuning.md | 2 +- .../features/compile/compilation_guide.md | 14 +- .../source_en/features/data_engine.md | 20 +- docs/mindspore/source_en/features/overview.md | 6 +- .../features/parallel/auto_parallel.md | 6 +- .../features/parallel/data_parallel.md | 14 +- .../features/parallel/operator_parallel.md | 18 +- .../features/parallel/optimizer_parallel.md | 12 +- .../features/parallel/pipeline_parallel.md | 24 +- .../features/runtime/memory_manager.md | 14 +- .../features/runtime/multilevel_pipeline.md | 6 +- .../runtime/multistream_concurrency.md | 6 +- .../note/api_mapping/pytorch_api_mapping.md | 1036 ++++++++--------- .../note/api_mapping/pytorch_diff/AGNEWS.md | 4 +- .../pytorch_diff/AmazonReviewFull.md | 4 +- .../pytorch_diff/AmazonReviewPolarity.md | 4 +- .../api_mapping/pytorch_diff/AmplitudeToDB.md | 4 +- .../note/api_mapping/pytorch_diff/CIFAR10.md | 4 +- .../note/api_mapping/pytorch_diff/CIFAR100.md | 4 +- .../api_mapping/pytorch_diff/CMUARCTIC.md | 4 +- .../note/api_mapping/pytorch_diff/CelebA.md | 4 +- .../api_mapping/pytorch_diff/Cityscapes.md | 4 +- .../pytorch_diff/CoNLL2000Chunking.md | 4 +- .../api_mapping/pytorch_diff/CocoDataset.md | 4 +- .../note/api_mapping/pytorch_diff/DBpedia.md | 4 +- .../api_mapping/pytorch_diff/DataLoader.md | 4 +- .../pytorch_diff/DistributedSampler.md | 4 +- .../pytorch_diff/FrequencyMasking.md | 4 +- .../note/api_mapping/pytorch_diff/GTZAN.md | 4 +- .../api_mapping/pytorch_diff/GriffinLim.md | 4 +- .../note/api_mapping/pytorch_diff/IMDB.md | 4 +- .../api_mapping/pytorch_diff/IWSLT2016.md | 4 +- .../api_mapping/pytorch_diff/IWSLT2017.md | 4 +- .../api_mapping/pytorch_diff/ImageFolder.md | 4 +- .../pytorch_diff/InverseMelScale.md | 4 +- .../note/api_mapping/pytorch_diff/LIBRITTS.md | 4 +- .../note/api_mapping/pytorch_diff/LJSPEECH.md | 4 +- .../note/api_mapping/pytorch_diff/Lookup.md | 4 +- .../note/api_mapping/pytorch_diff/MNIST.md | 4 +- .../note/api_mapping/pytorch_diff/MelScale.md | 4 +- .../pytorch_diff/MelSpectrogram.md | 4 +- .../note/api_mapping/pytorch_diff/Ngram.md | 4 +- .../api_mapping/pytorch_diff/Normalize.md | 4 +- .../api_mapping/pytorch_diff/PennTreebank.md | 4 +- .../api_mapping/pytorch_diff/RandomAffine.md | 4 +- .../pytorch_diff/RandomPerspective.md | 4 +- .../pytorch_diff/RandomResizedCrop.md | 4 +- .../pytorch_diff/RandomRotation.md | 4 +- .../api_mapping/pytorch_diff/RandomSampler.md | 4 +- .../api_mapping/pytorch_diff/RegexReplace.md | 4 +- .../note/api_mapping/pytorch_diff/Resample.md | 4 +- .../pytorch_diff/SPEECHCOMMANDS.md | 4 +- .../note/api_mapping/pytorch_diff/SQuAD1.md | 4 +- .../note/api_mapping/pytorch_diff/SQuAD2.md | 4 +- .../SentencePieceTokenizer_Out_INT.md | 4 +- .../SentencePieceTokenizer_Out_STRING.md | 4 +- .../pytorch_diff/SequentialSampler.md | 4 +- .../api_mapping/pytorch_diff/SogouNews.md | 4 +- .../pytorch_diff/SpectralCentroid.md | 4 +- .../api_mapping/pytorch_diff/Spectrogram.md | 4 +- .../pytorch_diff/SubsetRandomSampler.md | 4 +- .../note/api_mapping/pytorch_diff/TEDLIUM.md | 4 +- .../api_mapping/pytorch_diff/TimeMasking.md | 4 +- .../note/api_mapping/pytorch_diff/ToPIL.md | 4 +- 
.../note/api_mapping/pytorch_diff/ToTensor.md | 4 +- .../note/api_mapping/pytorch_diff/TypeCast.md | 4 +- .../note/api_mapping/pytorch_diff/UDPOS.md | 4 +- .../api_mapping/pytorch_diff/VOCDetection.md | 4 +- .../pytorch_diff/VOCSegmentation.md | 4 +- .../pytorch_diff/WeightedRandomSampler.md | 4 +- .../pytorch_diff/WhitespaceTokenizer.md | 4 +- .../api_mapping/pytorch_diff/WikiText103.md | 4 +- .../api_mapping/pytorch_diff/WikiText2.md | 4 +- .../note/api_mapping/pytorch_diff/YESNO.md | 4 +- .../api_mapping/pytorch_diff/YahooAnswers.md | 4 +- .../pytorch_diff/YelpReviewFull.md | 4 +- .../pytorch_diff/YelpReviewPolarity.md | 4 +- .../api_mapping/pytorch_diff/checkpoint.md | 4 +- .../api_mapping/pytorch_diff/deform_conv2d.md | 4 +- .../api_mapping/pytorch_diff/load_sp_model.md | 4 +- .../note/api_mapping/pytorch_diff/nms.md | 4 +- .../api_mapping/pytorch_diff/roi_align.md | 4 +- .../api_python/bfloat16_support.md | 72 +- .../api_python/dynamic_shape_func.md | 482 ++++---- .../api_python/dynamic_shape_nn.md | 114 +- .../api_python/dynamic_shape_primitive.md | 428 +++---- .../source_zh_cn/api_python/env_var_list.rst | 28 +- .../api_python/operator_list_parallel.md | 338 +++--- .../source_zh_cn/faq/data_processing.md | 20 +- .../source_zh_cn/faq/distributed_parallel.md | 4 +- .../source_zh_cn/faq/feature_advice.md | 4 +- .../source_zh_cn/faq/implement_problem.md | 16 +- docs/mindspore/source_zh_cn/faq/inference.md | 2 +- .../source_zh_cn/faq/installation.md | 2 +- .../source_zh_cn/faq/network_compilation.md | 14 +- .../source_zh_cn/faq/operators_compile.md | 22 +- .../source_zh_cn/faq/performance_tuning.md | 2 +- .../source_zh_cn/faq/precision_tuning.md | 2 +- docs/mindspore/source_zh_cn/features/amp.md | 4 +- .../features/compile/compilation_guide.md | 14 +- .../source_zh_cn/features/data_engine.md | 16 +- docs/mindspore/source_zh_cn/features/mint.md | 6 +- .../source_zh_cn/features/overview.md | 6 +- .../features/parallel/auto_parallel.md | 6 +- .../features/parallel/data_parallel.md | 14 +- .../features/parallel/operator_parallel.md | 18 +- .../features/parallel/optimizer_parallel.md | 8 +- .../features/parallel/pipeline_parallel.md | 12 +- .../features/runtime/memory_manager.md | 10 +- .../features/runtime/multilevel_pipeline.md | 2 +- .../runtime/multistream_concurrency.md | 2 +- docs/mindspore/source_zh_cn/features/view.md | 2 +- .../note/api_mapping/pytorch_api_mapping.md | 1036 ++++++++--------- .../note/api_mapping/pytorch_diff/AGNEWS.md | 4 +- .../pytorch_diff/AmazonReviewFull.md | 4 +- .../pytorch_diff/AmazonReviewPolarity.md | 4 +- .../api_mapping/pytorch_diff/AmplitudeToDB.md | 4 +- .../note/api_mapping/pytorch_diff/CIFAR10.md | 4 +- .../note/api_mapping/pytorch_diff/CIFAR100.md | 4 +- .../api_mapping/pytorch_diff/CMUARCTIC.md | 4 +- .../note/api_mapping/pytorch_diff/CelebA.md | 4 +- .../api_mapping/pytorch_diff/Cityscapes.md | 4 +- .../pytorch_diff/CoNLL2000Chunking.md | 4 +- .../api_mapping/pytorch_diff/CocoDataset.md | 4 +- .../note/api_mapping/pytorch_diff/DBpedia.md | 4 +- .../api_mapping/pytorch_diff/DataLoader.md | 4 +- .../pytorch_diff/DistributedSampler.md | 4 +- .../pytorch_diff/FrequencyMasking.md | 4 +- .../note/api_mapping/pytorch_diff/GTZAN.md | 4 +- .../api_mapping/pytorch_diff/GriffinLim.md | 4 +- .../note/api_mapping/pytorch_diff/IMDB.md | 4 +- .../api_mapping/pytorch_diff/IWSLT2016.md | 4 +- .../api_mapping/pytorch_diff/IWSLT2017.md | 4 +- .../api_mapping/pytorch_diff/ImageFolder.md | 4 +- .../pytorch_diff/InverseMelScale.md | 4 +- 
.../note/api_mapping/pytorch_diff/LIBRITTS.md | 4 +- .../note/api_mapping/pytorch_diff/LJSPEECH.md | 4 +- .../note/api_mapping/pytorch_diff/Lookup.md | 4 +- .../note/api_mapping/pytorch_diff/MNIST.md | 4 +- .../note/api_mapping/pytorch_diff/MelScale.md | 4 +- .../pytorch_diff/MelSpectrogram.md | 4 +- .../note/api_mapping/pytorch_diff/Ngram.md | 4 +- .../api_mapping/pytorch_diff/Normalize.md | 4 +- .../api_mapping/pytorch_diff/PennTreebank.md | 4 +- .../api_mapping/pytorch_diff/RandomAffine.md | 4 +- .../pytorch_diff/RandomPerspective.md | 4 +- .../pytorch_diff/RandomResizedCrop.md | 4 +- .../pytorch_diff/RandomRotation.md | 4 +- .../api_mapping/pytorch_diff/RandomSampler.md | 4 +- .../api_mapping/pytorch_diff/RegexReplace.md | 4 +- .../note/api_mapping/pytorch_diff/Resample.md | 4 +- .../pytorch_diff/SPEECHCOMMANDS.md | 4 +- .../note/api_mapping/pytorch_diff/SQuAD1.md | 4 +- .../note/api_mapping/pytorch_diff/SQuAD2.md | 4 +- .../SentencePieceTokenizer_Out_INT.md | 4 +- .../SentencePieceTokenizer_Out_STRING.md | 4 +- .../pytorch_diff/SequentialSampler.md | 4 +- .../api_mapping/pytorch_diff/SogouNews.md | 4 +- .../pytorch_diff/SpectralCentroid.md | 4 +- .../api_mapping/pytorch_diff/Spectrogram.md | 4 +- .../pytorch_diff/SubsetRandomSampler.md | 4 +- .../note/api_mapping/pytorch_diff/TEDLIUM.md | 4 +- .../api_mapping/pytorch_diff/TimeMasking.md | 4 +- .../note/api_mapping/pytorch_diff/ToPIL.md | 4 +- .../note/api_mapping/pytorch_diff/ToTensor.md | 4 +- .../note/api_mapping/pytorch_diff/TypeCast.md | 4 +- .../note/api_mapping/pytorch_diff/UDPOS.md | 4 +- .../api_mapping/pytorch_diff/VOCDetection.md | 4 +- .../pytorch_diff/VOCSegmentation.md | 4 +- .../pytorch_diff/WeightedRandomSampler.md | 4 +- .../pytorch_diff/WhitespaceTokenizer.md | 4 +- .../api_mapping/pytorch_diff/WikiText103.md | 4 +- .../api_mapping/pytorch_diff/WikiText2.md | 4 +- .../note/api_mapping/pytorch_diff/YESNO.md | 4 +- .../api_mapping/pytorch_diff/YahooAnswers.md | 4 +- .../pytorch_diff/YelpReviewFull.md | 4 +- .../pytorch_diff/YelpReviewPolarity.md | 4 +- .../api_mapping/pytorch_diff/checkpoint.md | 4 +- .../api_mapping/pytorch_diff/deform_conv2d.md | 4 +- .../api_mapping/pytorch_diff/load_sp_model.md | 4 +- .../note/api_mapping/pytorch_diff/nms.md | 4 +- .../api_mapping/pytorch_diff/roi_align.md | 4 +- .../docs/source_zh_cn/feature/performance.md | 10 +- .../docs/source_zh_cn/feature/precision.md | 2 +- .../docs/source_zh_cn/guide/get_start.md | 2 +- .../docs/source_zh_cn/guide/large_model.md | 2 +- docs/mindstudio/docs/source_zh_cn/overview.md | 4 +- .../version/mindstudio_insight.md | 2 +- .../source_en/developer_guide/contributing.md | 2 +- .../developer_guide/operations/custom_ops.md | 6 +- .../docs/source_en/faqs/faqs.md | 2 +- .../docs/source_en/general/security.md | 2 +- .../installation/installation.md | 2 +- .../quick_start/quick_start.md | 2 +- .../deepseek_r1_671b_w8a8_dp4_tp4_ep4.md | 4 +- .../qwen2.5_32b_multiNPU.md | 2 +- .../qwen2.5_7b_singleNPU.md | 2 +- .../source_en/release_notes/release_notes.md | 2 +- .../environment_variables.md | 4 +- .../supported_features/benchmark/benchmark.md | 2 +- .../features_list/features_list.md | 2 +- .../supported_features/parallel/parallel.md | 4 +- .../supported_features/profiling/profiling.md | 6 +- .../quantization/quantization.md | 2 +- .../models_list/models_list.md | 2 +- .../developer_guide/contributing.md | 2 +- .../developer_guide/operations/custom_ops.md | 6 +- .../docs/source_zh_cn/faqs/faqs.md | 2 +- .../docs/source_zh_cn/general/security.md | 2 +- 
.../installation/installation.md | 2 +- .../quick_start/quick_start.md | 2 +- .../deepseek_r1_671b_w8a8_dp4_tp4_ep4.md | 4 +- .../qwen2.5_32b_multiNPU.md | 2 +- .../qwen2.5_7b_singleNPU.md | 2 +- .../release_notes/release_notes.md | 2 +- .../environment_variables.md | 4 +- .../supported_features/benchmark/benchmark.md | 2 +- .../features_list/features_list.md | 2 +- .../supported_features/parallel/parallel.md | 4 +- .../supported_features/profiling/profiling.md | 6 +- .../quantization/quantization.md | 2 +- .../models_list/models_list.md | 2 +- install/mindspore_ascend_install_conda.md | 2 +- install/mindspore_ascend_install_conda_en.md | 2 +- install/mindspore_ascend_install_docker.md | 2 +- install/mindspore_ascend_install_docker_en.md | 2 +- install/mindspore_ascend_install_pip.md | 2 +- install/mindspore_ascend_install_pip_en.md | 2 +- install/mindspore_ascend_install_source.md | 4 +- install/mindspore_ascend_install_source_en.md | 26 +- install/mindspore_cpu_install_conda.md | 2 +- install/mindspore_cpu_install_conda_en.md | 2 +- install/mindspore_cpu_install_pip.md | 2 +- install/mindspore_cpu_install_pip_en.md | 2 +- install/mindspore_cpu_install_source.md | 4 +- install/mindspore_cpu_install_source_en.md | 4 +- install/mindspore_cpu_mac_install_conda.md | 2 +- install/mindspore_cpu_mac_install_conda_en.md | 2 +- install/mindspore_cpu_mac_install_pip.md | 2 +- install/mindspore_cpu_mac_install_pip_en.md | 2 +- install/mindspore_cpu_mac_install_source.md | 4 +- .../mindspore_cpu_mac_install_source_en.md | 4 +- install/mindspore_cpu_win_install_conda.md | 2 +- install/mindspore_cpu_win_install_conda_en.md | 2 +- install/mindspore_cpu_win_install_pip.md | 2 +- install/mindspore_cpu_win_install_pip_en.md | 2 +- install/mindspore_cpu_win_install_source.md | 6 +- .../mindspore_cpu_win_install_source_en.md | 6 +- install/third_party/msys_software_install.md | 2 +- .../third_party/msys_software_install_en.md | 2 +- .../third_party/third_party_cpu_install.md | 4 +- .../beginner/accelerate_with_static_graph.md | 12 +- tutorials/source_en/beginner/autograd.md | 12 +- tutorials/source_en/beginner/dataset.md | 26 +- tutorials/source_en/beginner/introduction.md | 8 +- tutorials/source_en/beginner/model.md | 20 +- tutorials/source_en/beginner/quick_start.md | 22 +- tutorials/source_en/beginner/save_load.md | 16 +- tutorials/source_en/beginner/tensor.md | 20 +- tutorials/source_en/beginner/train.md | 12 +- tutorials/source_en/compile/fusion_pass.md | 2 +- tutorials/source_en/compile/operators.md | 2 +- .../compile/python_builtin_functions.md | 14 +- tutorials/source_en/compile/statements.md | 4 +- tutorials/source_en/compile/static_graph.md | 40 +- .../static_graph_expert_programming.md | 12 +- .../source_en/custom_program/hook_program.md | 14 +- .../source_en/custom_program/op_custom.rst | 12 +- .../operation/cpp_api_for_custom_ops.md | 6 +- .../custom_program/operation/op_custom_adv.md | 4 +- .../custom_program/operation/op_custom_aot.md | 10 +- .../operation/op_custom_ascendc.md | 10 +- .../operation/op_custom_prim.rst | 14 +- .../operation/op_customopbuilder.md | 12 +- .../operation/op_customopbuilder_aclnn.md | 6 +- .../operation/op_customopbuilder_asdsip.md | 6 +- .../operation/op_customopbuilder_atb.md | 6 +- tutorials/source_en/cv/fcn8s.md | 14 +- tutorials/source_en/cv/resnet50.md | 16 +- tutorials/source_en/cv/ssd.md | 34 +- tutorials/source_en/cv/transfer_learning.md | 4 +- tutorials/source_en/cv/vit.md | 26 +- tutorials/source_en/dataset/augment.md | 4 +- 
tutorials/source_en/dataset/cache.md | 8 +- .../source_en/dataset/dataset_autotune.md | 10 +- .../source_en/dataset/dataset_offload.md | 2 +- tutorials/source_en/dataset/eager.md | 18 +- tutorials/source_en/dataset/optimize.ipynb | 38 +- tutorials/source_en/dataset/overview.md | 52 +- tutorials/source_en/dataset/python_objects.md | 2 +- tutorials/source_en/dataset/record.ipynb | 12 +- tutorials/source_en/dataset/sampler.md | 12 +- tutorials/source_en/debug/dryrun.md | 6 +- tutorials/source_en/debug/dump.md | 24 +- tutorials/source_en/debug/error_analysis.rst | 24 +- .../debug/error_analysis/cann_error_cases.md | 2 +- .../error_analysis/error_scenario_analysis.md | 44 +- .../debug/error_analysis/minddata_debug.md | 10 +- .../source_en/debug/error_analysis/mindir.md | 2 +- .../debug/error_analysis/mindrt_debug.md | 6 +- tutorials/source_en/debug/profiler.md | 28 +- tutorials/source_en/debug/pynative.md | 10 +- tutorials/source_en/debug/sdc.md | 4 +- tutorials/source_en/generative/cyclegan.md | 10 +- tutorials/source_en/generative/dcgan.md | 12 +- tutorials/source_en/generative/diffusion.md | 12 +- tutorials/source_en/generative/gan.md | 12 +- tutorials/source_en/generative/pix2pix.md | 10 +- .../source_en/model_infer/introduction.md | 2 +- .../model_infer/lite_infer/overview.md | 14 +- .../ms_infer/ms_infer_model_infer.rst | 6 +- .../ms_infer/ms_infer_model_serving_infer.md | 2 +- .../ms_infer/ms_infer_network_develop.md | 6 +- .../ms_infer/ms_infer_parallel_infer.md | 16 +- .../ms_infer/ms_infer_quantization.md | 4 +- .../model_migration/model_migration.md | 22 +- tutorials/source_en/nlp/sentiment_analysis.md | 22 +- tutorials/source_en/nlp/sequence_labeling.md | 2 +- tutorials/source_en/orange_pi/dev_start.md | 16 +- .../source_en/orange_pi/environment_setup.md | 34 +- tutorials/source_en/orange_pi/model_infer.md | 12 +- tutorials/source_en/orange_pi/overview.md | 8 +- tutorials/source_en/parallel/comm_fusion.md | 12 +- tutorials/source_en/parallel/data_parallel.md | 12 +- tutorials/source_en/parallel/dataset_slice.md | 10 +- .../source_en/parallel/distributed_case.rst | 4 +- .../distributed_gradient_accumulation.md | 12 +- .../source_en/parallel/dynamic_cluster.md | 14 +- .../high_dimension_tensor_parallel.md | 16 +- .../parallel/host_device_training.md | 18 +- tutorials/source_en/parallel/mpirun.md | 4 +- .../source_en/parallel/msrun_launcher.md | 20 +- tutorials/source_en/parallel/multiple_copy.md | 14 +- .../source_en/parallel/multiple_mixed.md | 6 +- .../source_en/parallel/operator_parallel.md | 22 +- .../source_en/parallel/optimize_technique.rst | 22 +- .../source_en/parallel/optimizer_parallel.md | 8 +- tutorials/source_en/parallel/overview.md | 32 +- .../source_en/parallel/pipeline_parallel.md | 14 +- tutorials/source_en/parallel/rank_table.md | 4 +- tutorials/source_en/parallel/recompute.md | 16 +- .../source_en/parallel/split_technique.md | 12 +- .../source_en/parallel/startup_method.rst | 12 +- .../source_en/parallel/strategy_select.md | 10 +- .../train_availability/fault_recover.md | 6 +- .../train_availability/graceful_exit.md | 6 +- .../accelerate_with_static_graph.ipynb | 12 +- .../source_zh_cn/beginner/autograd.ipynb | 12 +- tutorials/source_zh_cn/beginner/dataset.ipynb | 28 +- .../source_zh_cn/beginner/introduction.ipynb | 8 +- tutorials/source_zh_cn/beginner/model.ipynb | 20 +- .../source_zh_cn/beginner/quick_start.ipynb | 22 +- .../source_zh_cn/beginner/save_load.ipynb | 20 +- tutorials/source_zh_cn/beginner/tensor.ipynb | 22 +- 
tutorials/source_zh_cn/beginner/train.ipynb | 12 +- tutorials/source_zh_cn/compile/fusion_pass.md | 2 +- tutorials/source_zh_cn/compile/operators.md | 2 +- .../compile/python_builtin_functions.ipynb | 14 +- .../source_zh_cn/compile/statements.ipynb | 4 +- .../source_zh_cn/compile/static_graph.md | 38 +- .../static_graph_expert_programming.ipynb | 16 +- .../custom_program/hook_program.ipynb | 14 +- .../source_zh_cn/custom_program/op_custom.rst | 10 +- .../operation/cpp_api_for_custom_ops.md | 6 +- .../operation/op_custom_adv.ipynb | 4 +- .../custom_program/operation/op_custom_aot.md | 10 +- .../operation/op_custom_ascendc.md | 10 +- .../operation/op_custom_prim.ipynb | 12 +- .../operation/op_customopbuilder.md | 12 +- .../operation/op_customopbuilder_aclnn.md | 6 +- .../operation/op_customopbuilder_asdsip.md | 6 +- .../operation/op_customopbuilder_atb.md | 6 +- tutorials/source_zh_cn/cv/fcn8s.ipynb | 14 +- tutorials/source_zh_cn/cv/resnet50.ipynb | 16 +- tutorials/source_zh_cn/cv/ssd.ipynb | 34 +- .../source_zh_cn/cv/transfer_learning.ipynb | 6 +- tutorials/source_zh_cn/cv/vit.ipynb | 26 +- tutorials/source_zh_cn/dataset/augment.ipynb | 4 +- tutorials/source_zh_cn/dataset/cache.ipynb | 12 +- .../source_zh_cn/dataset/dataset_autotune.md | 10 +- .../source_zh_cn/dataset/dataset_offload.md | 2 +- tutorials/source_zh_cn/dataset/eager.ipynb | 20 +- tutorials/source_zh_cn/dataset/optimize.ipynb | 38 +- tutorials/source_zh_cn/dataset/overview.ipynb | 52 +- .../source_zh_cn/dataset/python_objects.ipynb | 2 +- tutorials/source_zh_cn/dataset/record.ipynb | 16 +- tutorials/source_zh_cn/dataset/sampler.ipynb | 16 +- tutorials/source_zh_cn/debug/dryrun.md | 6 +- tutorials/source_zh_cn/debug/dump.md | 24 +- .../source_zh_cn/debug/error_analysis.rst | 22 +- .../debug/error_analysis/cann_error_cases.md | 2 +- .../error_analysis/error_scenario_analysis.md | 38 +- .../debug/error_analysis/minddata_debug.md | 10 +- .../debug/error_analysis/mindir.md | 2 +- .../debug/error_analysis/mindrt_debug.md | 6 +- tutorials/source_zh_cn/debug/profiler.md | 28 +- tutorials/source_zh_cn/debug/pynative.md | 10 +- tutorials/source_zh_cn/debug/sdc.md | 4 +- .../source_zh_cn/generative/cyclegan.ipynb | 10 +- tutorials/source_zh_cn/generative/dcgan.ipynb | 12 +- .../source_zh_cn/generative/diffusion.ipynb | 12 +- tutorials/source_zh_cn/generative/gan.ipynb | 12 +- .../source_zh_cn/generative/pix2pix.ipynb | 10 +- .../source_zh_cn/model_infer/introduction.md | 2 +- .../model_infer/lite_infer/overview.md | 14 +- .../ms_infer/ms_infer_model_infer.rst | 6 +- .../ms_infer/ms_infer_model_serving_infer.md | 2 +- .../ms_infer/ms_infer_network_develop.md | 4 +- .../ms_infer/ms_infer_parallel_infer.md | 4 +- .../ms_infer/ms_infer_quantization.md | 4 +- .../model_migration/model_migration.md | 20 +- .../source_zh_cn/nlp/sentiment_analysis.ipynb | 22 +- .../source_zh_cn/nlp/sequence_labeling.ipynb | 2 +- .../source_zh_cn/orange_pi/dev_start.ipynb | 16 +- .../orange_pi/environment_setup.md | 4 +- .../source_zh_cn/orange_pi/model_infer.md | 6 +- tutorials/source_zh_cn/orange_pi/overview.md | 8 +- .../source_zh_cn/parallel/comm_fusion.md | 10 +- .../source_zh_cn/parallel/data_parallel.md | 12 +- .../source_zh_cn/parallel/dataset_slice.md | 10 +- .../parallel/distributed_case.rst | 4 +- .../distributed_gradient_accumulation.md | 12 +- .../source_zh_cn/parallel/dynamic_cluster.md | 14 +- .../high_dimension_tensor_parallel.md | 10 +- .../parallel/host_device_training.md | 14 +- tutorials/source_zh_cn/parallel/mpirun.md | 4 +- 
 .../source_zh_cn/parallel/msrun_launcher.md | 20 +-
 .../source_zh_cn/parallel/multiple_copy.md | 12 +-
 .../source_zh_cn/parallel/multiple_mixed.md | 6 +-
 .../parallel/operator_parallel.md | 22 +-
 .../parallel/optimize_technique.rst | 22 +-
 .../parallel/optimizer_parallel.md | 8 +-
 tutorials/source_zh_cn/parallel/overview.md | 32 +-
 .../parallel/pipeline_parallel.md | 14 +-
 tutorials/source_zh_cn/parallel/rank_table.md | 4 +-
 tutorials/source_zh_cn/parallel/recompute.md | 12 +-
 .../source_zh_cn/parallel/split_technique.md | 6 +-
 .../source_zh_cn/parallel/startup_method.rst | 12 +-
 .../source_zh_cn/parallel/strategy_select.md | 10 +-
 .../train_availability/fault_recover.md | 6 +-
 .../train_availability/graceful_exit.md | 6 +-
 734 files changed, 6161 insertions(+), 6177 deletions(-)
 create mode 100644 .jenkins/check/config/filter_notebooklint.txt

diff --git a/.jenkins/check/config/filter_notebooklint.txt b/.jenkins/check/config/filter_notebooklint.txt
new file mode 100644
index 0000000000..8d547b5b1b
--- /dev/null
+++ b/.jenkins/check/config/filter_notebooklint.txt
@@ -0,0 +1,2 @@
+"docs/tutorials/source_zh_cn"
+"docs/tutorials/source_en/dataset"
\ No newline at end of file
diff --git a/docs/lite/api/_custom/sphinx_builder_html b/docs/lite/api/_custom/sphinx_builder_html
index 3518c2e3c3..ca95bb1260 100644
--- a/docs/lite/api/_custom/sphinx_builder_html
+++ b/docs/lite/api/_custom/sphinx_builder_html
@@ -1116,7 +1116,7 @@ class StandaloneHTMLBuilder(Builder):
 
             # Add links to the Python operator interface.
             if "mindspore.ops." in output:
-                output = re.sub(r'(mindspore\.ops\.\w+) ', r'\1 ', output, count=0)
+                output = re.sub(r'(mindspore\.ops\.\w+) ', r'\1 ', output, count=0)
 
         except UnicodeError:
             logger.warning(__("a Unicode error occurred when rendering the page %s. "
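The `re.sub` call above rewrites every `mindspore.ops.*` mention through a capture group, with the version-specific anchor markup living in the replacement string (that markup was stripped when this patch was rendered, which is why the `-` and `+` lines look identical here). A minimal sketch of the same back-reference substitution, written in Java to match the Java-centric examples later in this patch; the `example.invalid` URL template is a placeholder, not the real docs link:

```java
public class OpsLinker {
    public static void main(String[] args) {
        String output = "See mindspore.ops.Add and mindspore.ops.Mul for details.";
        // $1 re-emits the captured operator name, mirroring \1 in the
        // Python re.sub above; the href template is a placeholder.
        String linked = output.replaceAll(
                "(mindspore\\.ops\\.\\w+)",
                "<a href=\"https://example.invalid/$1.html\">$1</a>");
        System.out.println(linked);
    }
}
```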
diff --git a/docs/lite/api/source_en/api_c/lite_c_example.rst b/docs/lite/api/source_en/api_c/lite_c_example.rst
index 877f7a048a..d7fb80b3e2 100644
--- a/docs/lite/api/source_en/api_c/lite_c_example.rst
+++ b/docs/lite/api/source_en/api_c/lite_c_example.rst
@@ -4,4 +4,4 @@ Example
 .. toctree::
    :maxdepth: 1
 
-   Simple Demo↗
+   Simple Demo↗
diff --git a/docs/lite/api/source_en/api_cpp/lite_cpp_example.rst b/docs/lite/api/source_en/api_cpp/lite_cpp_example.rst
index f6ae1bd446..becd55cf71 100644
--- a/docs/lite/api/source_en/api_cpp/lite_cpp_example.rst
+++ b/docs/lite/api/source_en/api_cpp/lite_cpp_example.rst
@@ -4,6 +4,6 @@ Example
 .. toctree::
    :maxdepth: 1
 
-   Simple Demo↗
-   Android Application Development Based on JNI Interface↗
-   High-level Usage↗
\ No newline at end of file
+   Simple Demo↗
+   Android Application Development Based on JNI Interface↗
+   High-level Usage↗
\ No newline at end of file
diff --git a/docs/lite/api/source_en/api_java/ascend_device_info.md b/docs/lite/api/source_en/api_java/ascend_device_info.md
index 3423232687..ddbe79e030 100644
--- a/docs/lite/api/source_en/api_java/ascend_device_info.md
+++ b/docs/lite/api/source_en/api_java/ascend_device_info.md
@@ -1,6 +1,6 @@
 # AscendDeviceInfo
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/ascend_device_info.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/ascend_device_info.md)
 
 ```java
 import com.mindspore.config.AscendDeviceInfo;
diff --git a/docs/lite/api/source_en/api_java/class_list.md b/docs/lite/api/source_en/api_java/class_list.md
index d6a0d1e1f8..9b471301f1 100644
--- a/docs/lite/api/source_en/api_java/class_list.md
+++ b/docs/lite/api/source_en/api_java/class_list.md
@@ -1,20 +1,20 @@
 # Class List
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/class_list.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/class_list.md)
 
 | Package | Class Name | Description | Supported At Cloud-side Inference | Supported At Device-side Inference |
 | ------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |--------|--------|
-| com.mindspore | [Model](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/model.html) | Model defines model in MindSpore for compiling and running compute graph. | √ | √ |
-| com.mindspore.config | [MSContext](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/mscontext.html) | MSContext is used to save the context during execution. | √ | √ |
-| com.mindspore | [MSTensor](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/mstensor.html) | MSTensor defines the tensor in MindSpore. | √ | √ |
-| com.mindspore | [ModelParallelRunner](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/model_parallel_runner.html) | Defines MindSpore Lite concurrent inference. | √ | ✕ |
-| com.mindspore.config | [RunnerConfig](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/runner_config.html) | RunnerConfig defines configuration parameters for concurrent inference. | √ | ✕ |
-| com.mindspore | [Graph](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/graph.html) | Graph defines the compute graph in MindSpore. | ✕ | √ |
-| com.mindspore.config | [CpuBindMode](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/mscontext.html#cpubindmode) | CpuBindMode defines the CPU binding mode. | √ | √ |
-| com.mindspore.config | [DeviceType](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/mscontext.html#devicetype) | DeviceType defines the back-end device type. | √ | √ |
-| com.mindspore.config | [DataType](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/mstensor.html#datatype) | DataType defines the supported data types. | √ | √ |
-| com.mindspore.config | [Version](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/version.html) | Version is used to obtain the version information of MindSpore. | ✕ | √ |
-| com.mindspore.config | [ModelType](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/model.html#modeltype) | ModelType defines the model file type. | √ | √ |
-| com.mindspore.config | [AscendDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/ascend_device_info.html) | The AscendDeviceInfo class is used to configure MindSpore Lite Ascend device options. | √ | ✕ |
-| com.mindspore.config | [TrainCfg](https://www.mindspore.cn/lite/api/en/r2.7.1/api_java/train_cfg.html) | Configuration parameters used for model training on the device. | ✕ | √ |
+| com.mindspore | [Model](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/model.html) | Model defines model in MindSpore for compiling and running compute graph. | √ | √ |
+| com.mindspore.config | [MSContext](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/mscontext.html) | MSContext is used to save the context during execution. | √ | √ |
+| com.mindspore | [MSTensor](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/mstensor.html) | MSTensor defines the tensor in MindSpore. | √ | √ |
+| com.mindspore | [ModelParallelRunner](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/model_parallel_runner.html) | Defines MindSpore Lite concurrent inference. | √ | ✕ |
+| com.mindspore.config | [RunnerConfig](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/runner_config.html) | RunnerConfig defines configuration parameters for concurrent inference. | √ | ✕ |
+| com.mindspore | [Graph](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/graph.html) | Graph defines the compute graph in MindSpore. | ✕ | √ |
+| com.mindspore.config | [CpuBindMode](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/mscontext.html#cpubindmode) | CpuBindMode defines the CPU binding mode. | √ | √ |
+| com.mindspore.config | [DeviceType](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/mscontext.html#devicetype) | DeviceType defines the back-end device type. | √ | √ |
+| com.mindspore.config | [DataType](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/mstensor.html#datatype) | DataType defines the supported data types. | √ | √ |
+| com.mindspore.config | [Version](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/version.html) | Version is used to obtain the version information of MindSpore. | ✕ | √ |
+| com.mindspore.config | [ModelType](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/model.html#modeltype) | ModelType defines the model file type. | √ | √ |
+| com.mindspore.config | [AscendDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/ascend_device_info.html) | The AscendDeviceInfo class is used to configure MindSpore Lite Ascend device options. | √ | ✕ |
+| com.mindspore.config | [TrainCfg](https://www.mindspore.cn/lite/api/en/r2.7.2/api_java/train_cfg.html) | Configuration parameters used for model training on the device. | ✕ | √ |
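The classes in this table compose in a fixed order: an initialized `MSContext` is handed to `Model.build`, and `MSTensor` objects carry the inputs and outputs. A minimal device-side sketch assembled only from methods documented on the linked pages; `model.ms` is a placeholder path and error handling is reduced to a single check:

```java
import com.mindspore.MSTensor;
import com.mindspore.Model;
import com.mindspore.config.CpuBindMode;
import com.mindspore.config.DeviceType;
import com.mindspore.config.MSContext;
import com.mindspore.config.ModelType;

public class QuickInfer {
    public static void main(String[] args) {
        MSContext context = new MSContext();
        context.init(2, CpuBindMode.MID_CPU);            // two threads, mid-core binding
        context.addDeviceInfo(DeviceType.DT_CPU, false); // CPU backend, fp16 off
        Model model = new Model();
        // "model.ms" is a placeholder for a converted MindSpore Lite model.
        if (!model.build("model.ms", ModelType.MT_MINDIR, context)) {
            System.err.println("build failed");
            return;
        }
        for (MSTensor input : model.getInputs()) {
            System.out.println(input.tensorName());
        }
        model.free();
    }
}
```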
diff --git a/docs/lite/api/source_en/api_java/graph.md b/docs/lite/api/source_en/api_java/graph.md
index 0b2f7f37ff..5e9330d91f 100644
--- a/docs/lite/api/source_en/api_java/graph.md
+++ b/docs/lite/api/source_en/api_java/graph.md
@@ -1,6 +1,6 @@
 # Graph
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/graph.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/graph.md)
 
 ```java
 import com.mindspore.Graph;
diff --git a/docs/lite/api/source_en/api_java/model.md b/docs/lite/api/source_en/api_java/model.md
index 6908478264..a66738ee85 100644
--- a/docs/lite/api/source_en/api_java/model.md
+++ b/docs/lite/api/source_en/api_java/model.md
@@ -1,6 +1,6 @@
 # Model
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/model.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/model.md)
 
 ```java
 import com.mindspore.Model;
diff --git a/docs/lite/api/source_en/api_java/model_parallel_runner.md b/docs/lite/api/source_en/api_java/model_parallel_runner.md
index 5f0186749f..6c8a553437 100644
--- a/docs/lite/api/source_en/api_java/model_parallel_runner.md
+++ b/docs/lite/api/source_en/api_java/model_parallel_runner.md
@@ -1,6 +1,6 @@
 # ModelParallelRunner
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/model_parallel_runner.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/model_parallel_runner.md)
 
 ```java
 import com.mindspore.config.RunnerConfig;
diff --git a/docs/lite/api/source_en/api_java/mscontext.md b/docs/lite/api/source_en/api_java/mscontext.md
index 8370a865b8..81949c20f5 100644
--- a/docs/lite/api/source_en/api_java/mscontext.md
+++ b/docs/lite/api/source_en/api_java/mscontext.md
@@ -1,6 +1,6 @@
 # MSContext
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/mscontext.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/mscontext.md)
 
 ```java
 import com.mindspore.config.MSContext;
@@ -54,7 +54,7 @@ Initialize MSContext for cpu.
 
 - Parameters
 
     - `threadNum`: Thread number config for thread pool.
-    - `cpuBindMode`: A **[CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)** **enum** variable.
+    - `cpuBindMode`: A **[CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)** **enum** variable.
 
 - Returns
@@ -69,7 +69,7 @@ Initialize MSContext.
 
 - Parameters
 
    - `threadNum`: Thread number config for thread pool.
-    - `cpuBindMode`: A **[CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)** **enum** variable.
+    - `cpuBindMode`: A **[CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)** **enum** variable.
     - `isEnableParallel`: Whether to enable parallel in different device.
 
 - Returns
@@ -86,7 +86,7 @@ Add device info for mscontext.
 
 - Parameters
 
-    - `deviceType`: A **[DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)** **enum** type.
+    - `deviceType`: A **[DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)** **enum** type.
     - `isEnableFloat16`: Whether to enable fp16.
 
 - Returns
@@ -101,7 +101,7 @@ Add device info for mscontext.
 
 - Parameters
 
-    - `deviceType`: A **[DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)** **enum** type.
+    - `deviceType`: A **[DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)** **enum** type.
    - `isEnableFloat16`: Whether to enable fp16.
    - `npuFreq`: Npu frequency.
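The `init` and `addDeviceInfo` overloads above differ only in their trailing arguments. A short sketch combining the three-argument `init` with the three-argument `addDeviceInfo`, using only the parameters documented above; the NPU frequency value `3` is an illustrative choice, not a recommendation:

```java
import com.mindspore.config.CpuBindMode;
import com.mindspore.config.DeviceType;
import com.mindspore.config.MSContext;

public class ContextSetup {
    public static void main(String[] args) {
        MSContext context = new MSContext();
        // Three-argument init: 4 threads, bind big cores, and enable
        // parallelism across devices (isEnableParallel = true).
        context.init(4, CpuBindMode.HIGHER_CPU, true);
        // Three-argument addDeviceInfo: NPU backend with fp16 on;
        // npuFreq = 3 is an illustrative frequency level.
        context.addDeviceInfo(DeviceType.DT_NPU, true, 3);
    }
}
```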
diff --git a/docs/lite/api/source_en/api_java/mstensor.md b/docs/lite/api/source_en/api_java/mstensor.md
index 7b06075e1d..3234d84bc7 100644
--- a/docs/lite/api/source_en/api_java/mstensor.md
+++ b/docs/lite/api/source_en/api_java/mstensor.md
@@ -1,6 +1,6 @@
 # MSTensor
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/mstensor.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/mstensor.md)
 
 ```java
 import com.mindspore.MSTensor;
@@ -86,7 +86,7 @@ Get the shape of the MindSpore MSTensor.
 
 public int getDataType()
 ```
 
-DataType is defined in [com.mindspore.DataType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/DataType.java).
+DataType is defined in [com.mindspore.DataType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/DataType.java).
 
 - Returns
diff --git a/docs/lite/api/source_en/api_java/runner_config.md b/docs/lite/api/source_en/api_java/runner_config.md
index f21c1f0986..c5c8f16b66 100644
--- a/docs/lite/api/source_en/api_java/runner_config.md
+++ b/docs/lite/api/source_en/api_java/runner_config.md
@@ -1,6 +1,6 @@
 # RunnerConfig
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/runner_config.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/runner_config.md)
 
 RunnerConfig defines the configuration parameters of MindSpore Lite concurrent inference.
diff --git a/docs/lite/api/source_en/api_java/train_cfg.md b/docs/lite/api/source_en/api_java/train_cfg.md
index 1c16a767e3..8980944bea 100644
--- a/docs/lite/api/source_en/api_java/train_cfg.md
+++ b/docs/lite/api/source_en/api_java/train_cfg.md
@@ -1,6 +1,6 @@
 # TrainCfg
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/train_cfg.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/train_cfg.md)
 
 ```java
 import com.mindspore.config.TrainCfg;
diff --git a/docs/lite/api/source_en/api_java/version.md b/docs/lite/api/source_en/api_java/version.md
index 32299b70f8..72b78c1f7c 100644
--- a/docs/lite/api/source_en/api_java/version.md
+++ b/docs/lite/api/source_en/api_java/version.md
@@ -1,6 +1,6 @@
 # Version
 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_en/api_java/version.md)
+[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_en/api_java/version.md)
 
 ```java
 import com.mindspore.config.Version;
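The `Version` class shown above makes a convenient smoke test that the Lite JAR and its native libraries resolve; a one-line sketch:

```java
import com.mindspore.config.Version;

public class PrintVersion {
    public static void main(String[] args) {
        // Prints the MindSpore Lite version string, e.g. "MindSpore Lite 2.7.2".
        System.out.println(Version.version());
    }
}
```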
diff --git a/docs/lite/api/source_en/index.rst b/docs/lite/api/source_en/index.rst
index cebb66ccb0..404b742f39 100644
--- a/docs/lite/api/source_en/index.rst
+++ b/docs/lite/api/source_en/index.rst
@@ -12,21 +12,21 @@ Summary of MindSpore Lite API support
 +---------+----------------------------------------------------------------------------+---------------------------------------------------------+----------------------------------------+
 | Class   | Description                                                                | C++ API                                                 | Python API                             |
 +=========+============================================================================+=========================================================+========================================+
-| Context | Set the number of threads at runtime | void SetThreadNum(int32_t thread_num) | `Context.cpu.thread_num `__ |
+| Context | Set the number of threads at runtime | void SetThreadNum(int32_t thread_num) | `Context.cpu.thread_num `__ |
 +---------+----------------------------------------------------------------------------+---------------------------------------------------------+----------------------------------------+
-| Context | Get the current thread number setting | int32_t GetThreadNum() const | `Context.cpu.thread_num `__ |
+| Context | Get the current thread number setting | int32_t GetThreadNum() const | `Context.cpu.thread_num `__ |
 +---------+----------------------------------------------------------------------------+---------------------------------------------------------+----------------------------------------+
-| Context | Set the parallel number of operators at runtime | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num `__ |
+| Context | Set the parallel number of operators at runtime | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num `__ |
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get the current operators parallel number setting | int32_t GetInterOpParallelNum() const | `Context.cpu.inter_op_parallel_num `__ | +| Context | Get the current operators parallel number setting | int32_t GetInterOpParallelNum() const | `Context.cpu.inter_op_parallel_num `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Set the thread affinity to CPU cores | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode `__ | +| Context | Set the thread affinity to CPU cores | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get the thread affinity of CPU cores | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode `__ | +| Context | Get the thread affinity of CPU cores | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Set the thread lists to CPU cores | void SetThreadAffinity(const std::vector &core_list) | `Context.cpu.thread_affinity_core_list `__ | +| Context | Set the thread lists to CPU cores | void SetThreadAffinity(const std::vector &core_list) | `Context.cpu.thread_affinity_core_list `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get the thread lists of CPU cores | std::vector GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list `__ | +| Context | Get the thread lists of CPU cores | std::vector GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | Set the status whether to perform model inference or training in parallel | void SetEnableParallel(bool is_parallel) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -44,7 +44,7 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | Get the mode of the model run | bool GetMultiModalHW() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | Get a mutable reference of DeviceInfoContext vector in this context | std::vector> &MutableDeviceInfo() | Wrapped in `Context.target `__ | +| Context | Get a mutable reference of DeviceInfoContext vector in this context | std::vector> &MutableDeviceInfo() | Wrapped in `Context.target `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 
DeviceInfoContext | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -62,29 +62,29 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | obtain memory allocator | std::shared_ptr GetAllocator() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| CPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `context.cpu `__ | +| CPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `context.cpu `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| CPUDeviceInfo | Set enables to perform the float16 inference | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode `__ | +| CPUDeviceInfo | Set enables to perform the float16 inference | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| CPUDeviceInfo | Get enables to perform the float16 inference | bool GetEnableFP16() const | `Context.cpu.precision_mode `__ | +| CPUDeviceInfo | Get enables to perform the float16 inference | bool GetEnableFP16() const | `Context.cpu.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.gpu `__ | +| GPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.gpu `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Set device id | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id `__ | +| GPUDeviceInfo | Set device id | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get the device id | uint32_t GetDeviceID() const | `Context.gpu.device_id `__ | +| GPUDeviceInfo | Get the device id | uint32_t GetDeviceID() const | `Context.gpu.device_id `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get the distribution rank id | int GetRankID() const | `Context.gpu.rank_id `__ | +| GPUDeviceInfo | Get the distribution rank id | int GetRankID() const | `Context.gpu.rank_id `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get the distribution group size | int GetGroupSize() const | `Context.gpu.group_size `__ | +| GPUDeviceInfo | Get the distribution group size | int GetGroupSize() const | `Context.gpu.group_size `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | Set the precision mode | void SetPrecisionMode(const std::string &precision_mode) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | Get the precision mode | std::string GetPrecisionMode() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Set enables to perform 
the float16 inference | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode `__ | +| GPUDeviceInfo | Set enables to perform the float16 inference | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| GPUDeviceInfo | Get enables to perform the float16 inference | bool GetEnableFP16() const | `Context.gpu.precision_mode `__ | +| GPUDeviceInfo | Get enables to perform the float16 inference | bool GetEnableFP16() const | `Context.gpu.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | Set enables to sharing mem with OpenGL | void SetEnableGLTexture(bool is_enable_gl_texture) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -98,11 +98,11 @@ Summary of MindSpore Lite API support 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | GPUDeviceInfo | Get current OpenGL display | void \*GetGLDisplay() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.ascend `__ | +| AscendDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.ascend `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Set device id | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id `__ | +| AscendDeviceInfo | Set device id | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Get the device id | uint32_t GetDeviceID() const | `Context.ascend.device_id `__ | +| AscendDeviceInfo | Get the device id | uint32_t GetDeviceID() const | `Context.ascend.device_id `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | Set AIPP configuration file path | void SetInsertOpConfigPath(const std::string &cfg_path) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -132,9 +132,9 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | Get type of model outputs | enum DataType 
GetOutputType() const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Set precision mode of model | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode `__ | +| AscendDeviceInfo | Set precision mode of model | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| AscendDeviceInfo | Get precision mode of model | std::string GetPrecisionMode() const | `Context.ascend.precision_mode `__ | +| AscendDeviceInfo | Get precision mode of model | std::string GetPrecisionMode() const | `Context.ascend.precision_mode `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | AscendDeviceInfo | Set op select implementation mode | void SetOpSelectImplMode(const std::string &op_select_impl_mode) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -160,7 +160,7 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Build a model from model buffer so that it can run on a device | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr &model_context = nullptr) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Load and build a model from model buffer so that it can run on a device | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr &model_context = nullptr) | `Model.build_from_file `__ | +| Model | Load and build a model from model buffer so that it can run on a device | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr &model_context = nullptr) | `Model.build_from_file `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Build a model from model buffer so that it can run on a device | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr &model_context, const Key &dec_key, const std::string &dec_mode, const std::string &cropto_lib_path) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -172,11 +172,11 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Build a Transfer Learning model where the backbone weights are fixed and the head weights are trainable | Status BuildTransferLearning(GraphCell backbone, GraphCell head, const std::shared_ptr &context, const std::shared_ptr &train_cfg = nullptr) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Resize the shapes of inputs | Status Resize(const std::vector &inputs, const std::vector > &dims) | `Model.resize `__ | +| Model | Resize the shapes of inputs | Status Resize(const std::vector &inputs, const std::vector > &dims) | `Model.resize `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Change the size and or content of weight tensors | Status UpdateWeights(const std::vector &new_weights) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Inference model API | Status Predict(const std::vector &inputs, std::vector \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict `__ | +| Model | Inference model API | Status Predict(const std::vector &inputs, std::vector \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Inference model API only with callback | Status Predict(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -188,11 +188,11 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Check if data preprocess exists in model | bool HasPreprocess() | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Load config file | Status LoadConfig(const std::string &config_path) | Wrapped in the parameter `config_path` of `Model.build_from_file `__ | +| 
Model | Load config file | Status LoadConfig(const std::string &config_path) | Wrapped in the parameter `config_path` of `Model.build_from_file `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Update config | Status UpdateConfig(const std::string §ion, const std::pair &config) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Obtains all input tensors of the model | std::vector GetInputs() | `Model.get_inputs `__ | +| Model | Obtains all input tensors of the model | std::vector GetInputs() | `Model.get_inputs `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Obtains the input tensor of the model by name | MSTensor GetInputByTensorName(const std::string &tensor_name) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -220,7 +220,7 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Accessor to TrainLoop metric objects | std::vector GetMetrics() | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Model | Obtains all output tensors of the model | std::vector GetOutputs() | Wrapped in the return value of `Model.predict `__ | +| Model | Obtains all output tensors of the model | std::vector GetOutputs() | Wrapped in the return value of `Model.predict `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Obtains names of all output tensors of 
the model | std::vector GetOutputTensorNames() | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -240,33 +240,33 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Model | Check if the device supports the model | static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Set the number of workers at runtime | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num `__ | +| RunnerConfig | Set the number of workers at runtime | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Get the current operators parallel workers number setting | int32_t GetWorkersNum() const | `Context.parallel.workers_num `__ | +| RunnerConfig | Get the current operators parallel workers number setting | int32_t GetWorkersNum() const | `Context.parallel.workers_num `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Set the context at runtime | void SetContext(const std::shared_ptr &context) | Wrapped in `Context.parallel `__ | +| RunnerConfig | Set the context at runtime | void SetContext(const std::shared_ptr &context) | Wrapped in `Context.parallel `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Get the current context setting | std::shared_ptr GetContext() const | Wrapped in `Context.parallel `__ | +| RunnerConfig | Get the current context setting | std::shared_ptr GetContext() const | Wrapped in `Context.parallel `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Set the config before runtime | void SetConfigInfo(const std::string §ion, const std::map &config) | `Context.parallel.config_info `__ | +| RunnerConfig | Set the config before runtime | void SetConfigInfo(const std::string §ion, const std::map &config) | `Context.parallel.config_info `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Get the current config setting | std::map> GetConfigInfo() const | `Context.parallel.config_info `__ | +| RunnerConfig | Get the current config setting | std::map> GetConfigInfo() const | `Context.parallel.config_info `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Set the config path before runtime | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path `__ | +| RunnerConfig | Set the config path before runtime | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path `__ | 
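The RunnerConfig rows above combine into one short configuration sequence. A minimal sketch, assuming the server-inference header path and a CPU-only context; the template arguments (`Context`, `std::map<std::string, std::string>`) and the "execution_plan" section name are filled in from the C++ API reference and are illustrative, not part of the table itself:

```cpp
#include <map>
#include <memory>
#include <string>
#include "include/api/context.h"
#include "include/api/model_parallel_runner.h"  // assumed header for RunnerConfig

std::shared_ptr<mindspore::RunnerConfig> BuildRunnerConfig() {
  auto context = std::make_shared<mindspore::Context>();
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());

  auto config = std::make_shared<mindspore::RunnerConfig>();
  config->SetWorkersNum(4);        // four parallel workers
  config->SetContext(context);     // device context shared by all workers
  config->SetConfigInfo("execution_plan",  // section name is illustrative only
                        {{"op1", "data_type:float16"}});
  return config;
}
```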
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| RunnerConfig | Get the current config path | std::string GetConfigPath() const | `Context.parallel.config_path `__ | +| RunnerConfig | Get the current config path | std::string GetConfigPath() const | `Context.parallel.config_path `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | build a model parallel runner from model path so that it can run on a device | Status Init(const std::string &model_path, const std::shared_ptr &runner_config = nullptr) | `Model.parallel_runner.build_from_file `__ | +| ModelParallelRunner | build a model parallel runner from model path so that it can run on a device | Status Init(const std::string &model_path, const std::shared_ptr &runner_config = nullptr) | `Model.parallel_runner.build_from_file `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelParallelRunner | build a model parallel runner from model buffer so that it can run on a device | Status Init(const void \*model_data, const size_t data_size, const std::shared_ptr &runner_config = nullptr) | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | Obtains all input tensors information of the model | std::vector GetInputs() | `Model.parallel_runner.get_inputs `__ | +| ModelParallelRunner | Obtains all input tensors information of the model | std::vector GetInputs() | `Model.parallel_runner.get_inputs `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | Obtains all output tensors information of the model | std::vector GetOutputs() | Wrapped in the return value of `Model.parallel_runner.predict `__ | +| ModelParallelRunner | Obtains all output tensors information of the model | std::vector GetOutputs() | Wrapped in the return value of `Model.parallel_runner.predict `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelParallelRunner | Inference ModelParallelRunner | Status Predict(const std::vector &inputs, std::vector \*outputs,const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict `__ | +| ModelParallelRunner | Inference ModelParallelRunner | Status Predict(const std::vector &inputs, std::vector \*outputs,const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict `__ | 
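The ModelParallelRunner rows (Init, GetInputs, GetOutputs, Predict) correspond to the usual load-then-predict loop. A sketch under the same assumptions as the RunnerConfig example above, with error handling reduced to a status check:

```cpp
#include <memory>
#include <string>
#include <vector>
#include "include/api/model_parallel_runner.h"  // assumed header

int RunParallel(const std::string &model_path,
                const std::shared_ptr<mindspore::RunnerConfig> &config) {
  mindspore::ModelParallelRunner runner;
  if (runner.Init(model_path, config) != mindspore::kSuccess) {
    return -1;  // build failed
  }
  std::vector<mindspore::MSTensor> inputs = runner.GetInputs();
  // ... fill each input via MutableData() before predicting ...
  std::vector<mindspore::MSTensor> outputs;
  if (runner.Predict(inputs, &outputs) != mindspore::kSuccess) {
    return -1;  // inference failed
  }
  return 0;
}
```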
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Creates a MSTensor object, whose data need to be copied before accessed by Model | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len) noexcept | `Tensor `__ | +| MSTensor | Creates a MSTensor object, whose data need to be copied before accessed by Model | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len) noexcept | `Tensor `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Creates a MSTensor object, whose data can be directly accessed by Model | static inline MSTensor \*CreateRefTensor(const std::string &name, DataType type, const std::vector &shape, const void \*data, size_t data_len, bool own_data = true) noexcept | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -280,19 +280,19 @@ Summary of MindSpore Lite API support 
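The difference between the CreateTensor and CreateRefTensor rows above is ownership: the former copies `data`, the latter wraps it. A sketch, with the `<int64_t>` shape template argument filled in from the C++ API reference:

```cpp
#include <vector>
#include "include/api/types.h"  // assumed header for MSTensor

void TensorCreation() {
  float buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};
  std::vector<int64_t> shape = {2, 2};

  // Copies buf internally; buf may be released afterwards.
  auto *copied = mindspore::MSTensor::CreateTensor(
      "copied", mindspore::DataType::kNumberTypeFloat32, shape, buf, sizeof(buf));

  // References buf directly; own_data=false keeps the stack buffer caller-owned.
  auto *ref = mindspore::MSTensor::CreateRefTensor(
      "ref", mindspore::DataType::kNumberTypeFloat32, shape, buf, sizeof(buf), false);

  // Both kinds of tensor are released with DestroyTensorPtr.
  mindspore::MSTensor::DestroyTensorPtr(copied);
  mindspore::MSTensor::DestroyTensorPtr(ref);
}
```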
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Destroy an object created by `Clone` , `StringsToTensor` , `CreateRefTensor` or `CreateTensor` | static void DestroyTensorPtr(MSTensor \*tensor) noexcept | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the name of the MSTensor | std::string Name() const | `Tensor.name `__ | +| MSTensor | Obtains the name of the MSTensor | std::string Name() const | `Tensor.name `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the data type of the MSTensor | enum DataType DataType() const | `Tensor.dtype `__ | +| MSTensor | Obtains the data type of the MSTensor | enum DataType DataType() const | `Tensor.dtype `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the shape of the MSTensor | const std::vector &Shape() const | `Tensor.shape `__ | +| MSTensor | Obtains the shape of the MSTensor | const std::vector &Shape() const | `Tensor.shape `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the number of elements of the MSTensor | int64_t ElementNum() const | `Tensor.element_num `__ | +| MSTensor | Obtains the number of elements of the MSTensor | int64_t ElementNum() const | `Tensor.element_num `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Obtains a shared pointer to the copy of data of the MSTensor | std::shared_ptr Data() const | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the pointer to the data of the MSTensor | void \*MutableData() | Wrapped in `Tensor.get_data_to_numpy `__ and `Tensor.set_data_from_numpy `__ | +| MSTensor | Obtains the pointer to the data of the MSTensor | void \*MutableData() | Wrapped in `Tensor.get_data_to_numpy `__ and `Tensor.set_data_from_numpy `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtains the length of the data of the MSTensor, in bytes | size_t DataSize() const | `Tensor.data_size `__ | +| MSTensor | Obtains the length of the data of the MSTensor, in bytes | size_t DataSize() const | `Tensor.data_size `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Get whether the MSTensor data is const data | bool IsConst() const | | 
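The accessor rows above (Shape, ElementNum, Data, MutableData, DataSize) support a generic read-out routine. A sketch assuming a float32 tensor; `Data()` returning `std::shared_ptr<const void>` is recalled from the C++ API, since the template argument is not visible in the table:

```cpp
#include <cstdio>
#include <memory>
#include "include/api/types.h"  // assumed header

void DumpFloatTensor(const mindspore::MSTensor &t) {
  if (t.DataType() != mindspore::DataType::kNumberTypeFloat32) return;
  std::shared_ptr<const void> holder = t.Data();  // shared view of a data copy
  const float *p = static_cast<const float *>(holder.get());
  std::printf("%s: %lld elements, %zu bytes\n", t.Name().c_str(),
              static_cast<long long>(t.ElementNum()), t.DataSize());
  for (int64_t i = 0; i < t.ElementNum(); ++i) {
    std::printf("  [%lld] = %f\n", static_cast<long long>(i), p[i]);
  }
}
```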
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -308,19 +308,19 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Get the boolean value that indicates whether the MSTensor not equals tensor | bool operator!=(const MSTensor &tensor) const | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Set the shape of for the MSTensor | void SetShape(const std::vector &shape) | `Tensor.shape `__ | +| MSTensor | Set the shape of for the MSTensor | void SetShape(const std::vector &shape) | `Tensor.shape `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Set the 
data type for the MSTensor | void SetDataType(enum DataType data_type) | `Tensor.dtype `__ | +| MSTensor | Set the data type for the MSTensor | void SetDataType(enum DataType data_type) | `Tensor.dtype `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Set the name for the MSTensor | void SetTensorName(const std::string &name) | `Tensor.name `__ | +| MSTensor | Set the name for the MSTensor | void SetTensorName(const std::string &name) | `Tensor.name `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Set the Allocator for the MSTensor | void SetAllocator(std::shared_ptr allocator) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Obtain the Allocator of the MSTensor | std::shared_ptr allocator() const | | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Set the format for the MSTensor | void SetFormat(mindspore::Format format) | `Tensor.format `__ | +| MSTensor | Set the format for the MSTensor | void SetFormat(mindspore::Format format) | `Tensor.format `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | Obtain the format of the MSTensor | mindspore::Format format() const | `Tensor.format `__ | +| MSTensor | Obtain the format of the MSTensor | mindspore::Format format() const | `Tensor.format `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Set the data for the MSTensor | void SetData(void \*data, bool own_data = true) | | 
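The mutator rows (SetShape, SetDataType, SetTensorName, SetFormat, SetData) are typically used together when rebinding a graph input before prediction. A sketch; `own_data = false` keeps the caller-owned buffer from being freed by the tensor, and the header paths are assumptions:

```cpp
#include <cstddef>
#include "include/api/format.h"  // assumed header for mindspore::Format
#include "include/api/types.h"   // assumed header for MSTensor

void RebindInput(mindspore::MSTensor *input, float *host_buf, size_t byte_len) {
  input->SetTensorName("image");
  input->SetDataType(mindspore::DataType::kNumberTypeFloat32);
  input->SetShape({1, 224, 224, 3});             // NHWC layout
  input->SetFormat(mindspore::Format::NHWC);
  input->SetData(host_buf, /*own_data=*/false);  // caller keeps ownership
  (void)byte_len;  // byte_len must match DataSize() for the shape and dtype set above
}
```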
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -332,15 +332,15 @@ Summary of MindSpore Lite API support +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | Set the quantization parameters for the MSTensor | void SetQuantParams(std::vector quant_params) | | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | Construct a ModelGroup object and indicate shared workspace memory or shared weight memory, with default shared workspace memory | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup `__ | +| ModelGroup | Construct a ModelGroup object and indicate shared workspace memory or shared weight memory, with default shared workspace memory | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup `__ | 
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | When sharing weight memory, add model objects that require shared weight memory | Status AddModel(const std::vector &model_list) | `ModelGroup.add_model `__ | +| ModelGroup | When sharing weight memory, add model objects that require shared weight memory | Status AddModel(const std::vector &model_list) | `ModelGroup.add_model `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | When sharing workspace memory, add the path of the model that requires shared workspace memory | Status AddModel(const std::vector &model_path_list) | `ModelGroup.add_model `__ | +| ModelGroup | When sharing workspace memory, add the path of the model that requires shared workspace memory | Status AddModel(const std::vector &model_path_list) | `ModelGroup.add_model `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | When sharing workspace memory, add a model buffer that requires shared workspace memory | Status AddModel(const std::vector> &model_buff_list) | | 
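The ModelGroup rows above (together with the CalMaxSizeOfWorkspace row that follows) describe workspace sharing across several models on one device. A sketch using the path-based AddModel overload; the header path and the enum spellings (`ModelGroupFlag::kShareWorkspace`, `ModelType::kMindIR`) are taken from the C++ API reference and should be treated as assumptions:

```cpp
#include <memory>
#include <string>
#include <vector>
#include "include/api/model_group.h"  // assumed header

int ShareWorkspace(const std::vector<std::string> &model_paths,
                   const std::shared_ptr<mindspore::Context> &context) {
  // kShareWorkspace is the default flag; shown explicitly for clarity.
  mindspore::ModelGroup group(mindspore::ModelGroupFlag::kShareWorkspace);
  if (group.AddModel(model_paths) != mindspore::kSuccess) {
    return -1;
  }
  // Computes the maximum workspace across the group before the models are built.
  if (group.CalMaxSizeOfWorkspace(mindspore::ModelType::kMindIR, context) !=
      mindspore::kSuccess) {
    return -1;
  }
  return 0;
}
```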
+---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | When sharing workspace memory, calculate the maximum workspace memory size | Status CalMaxSizeOfWorkspace(ModelType model_type, const std::shared_ptr &ms_context) | `ModelGroup.cal_max_size_of_workspace `__ | +| ModelGroup | When sharing workspace memory, calculate the maximum workspace memory size | Status CalMaxSizeOfWorkspace(ModelType model_type, const std::shared_ptr &ms_context) | `ModelGroup.cal_max_size_of_workspace `__ | +---------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/docs/lite/api/source_zh_cn/api_c/context_c.md b/docs/lite/api/source_zh_cn/api_c/context_c.md index 47e823ce6d..132488fa83 100644 --- a/docs/lite/api/source_zh_cn/api_c/context_c.md +++ b/docs/lite/api/source_zh_cn/api_c/context_c.md @@ -1,6 +1,6 @@ # context_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_c/context_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_c/context_c.md) ```c #include @@ -198,7 +198,7 @@ MSDeviceInfoHandle MSDeviceInfoCreate(MSDeviceType device_type) 新建运行设备信息,若创建失败则会返回`nullptr`,并在日志中输出信息。 - 参数 - - `device_type`: 设备类型,具体见[MSDeviceType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_c/types_c.html#msdevicetype)。 + - `device_type`: 设备类型,具体见[MSDeviceType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_c/types_c.html#msdevicetype)。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_c/data_type_c.md b/docs/lite/api/source_zh_cn/api_c/data_type_c.md index 7d66a2db36..c4678a378f 100644 --- a/docs/lite/api/source_zh_cn/api_c/data_type_c.md +++ b/docs/lite/api/source_zh_cn/api_c/data_type_c.md @@ -1,6 +1,6 @@ # data_type_c 
-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_c/data_type_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_c/data_type_c.md) ```C #include diff --git a/docs/lite/api/source_zh_cn/api_c/format_c.md b/docs/lite/api/source_zh_cn/api_c/format_c.md index 29b7dc2bef..82f5351800 100644 --- a/docs/lite/api/source_zh_cn/api_c/format_c.md +++ b/docs/lite/api/source_zh_cn/api_c/format_c.md @@ -1,6 +1,6 @@ # format_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_c/format_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_c/format_c.md) ```C #include diff --git a/docs/lite/api/source_zh_cn/api_c/lite_c_example.rst b/docs/lite/api/source_zh_cn/api_c/lite_c_example.rst index 32a216e74a..d7741e3994 100644 --- a/docs/lite/api/source_zh_cn/api_c/lite_c_example.rst +++ b/docs/lite/api/source_zh_cn/api_c/lite_c_example.rst @@ -4,4 +4,4 @@ .. toctree:: :maxdepth: 1 - 极简Demo↗ + 极简Demo↗ diff --git a/docs/lite/api/source_zh_cn/api_c/model_c.md b/docs/lite/api/source_zh_cn/api_c/model_c.md index b1e17721fc..acc48d1ae5 100644 --- a/docs/lite/api/source_zh_cn/api_c/model_c.md +++ b/docs/lite/api/source_zh_cn/api_c/model_c.md @@ -1,6 +1,6 @@ # model_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_c/model_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_c/model_c.md) ```C #include @@ -145,8 +145,8 @@ MSStatus MSModelBuild(MSModelHandle model, const void* model_data, size_t data_s - `model`: 指向模型对象的指针。 - `model_data`: 内存中已经加载的模型数据地址。 - `data_size`: 模型数据的长度。 - - `model_type`: 模型文件类型,具体见: [MSModelType](https://mindspore.cn/lite/api/zh-CN/r2.7.1/api_c/types_c.html#msmodeltype)。 - - `model_context`: 模型的上下文环境,具体见: [Context](https://mindspore.cn/lite/api/zh-CN/r2.7.1/api_c/context_c.html)。 + - `model_type`: 模型文件类型,具体见: [MSModelType](https://mindspore.cn/lite/api/zh-CN/r2.7.2/api_c/types_c.html#msmodeltype)。 + - `model_context`: 模型的上下文环境,具体见: [Context](https://mindspore.cn/lite/api/zh-CN/r2.7.2/api_c/context_c.html)。 - 返回值 @@ -165,8 +165,8 @@ MSStatus MSModelBuildFromFile(MSModelHandle model, const char* model_path, MSMod - `model`: 指向模型对象的指针。 - `model_path`: 模型文件路径。 - - `model_type`: 模型文件类型,具体见: [MSModelType](https://mindspore.cn/lite/api/zh-CN/r2.7.1/api_c/types_c.html#msmodeltype)。 - - `model_context`: 模型的上下文环境,具体见: [Context](https://mindspore.cn/lite/api/zh-CN/r2.7.1/api_c/context_c.html)。 + - `model_type`: 模型文件类型,具体见: [MSModelType](https://mindspore.cn/lite/api/zh-CN/r2.7.2/api_c/types_c.html#msmodeltype)。 + - `model_context`: 模型的上下文环境,具体见: [Context](https://mindspore.cn/lite/api/zh-CN/r2.7.2/api_c/context_c.html)。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_c/status_c.md 
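The model_c.md hunk above documents MSModelBuildFromFile together with the context handles from context_c.md. A sketch of the whole sequence (the C API is also callable from C++); the header path is an assumption, and whether the model takes ownership of the context follows the C API notes and is worth re-checking there:

```cpp
#include "include/c_api/model_c.h"  // assumed header; pulls in context_c.h and types_c.h

int BuildFromFile(const char *model_path) {
  MSContextHandle context = MSContextCreate();
  MSDeviceInfoHandle cpu = MSDeviceInfoCreate(kMSDeviceTypeCPU);
  MSContextAddDeviceInfo(context, cpu);  // context takes over the device info

  MSModelHandle model = MSModelCreate();
  MSStatus ret = MSModelBuildFromFile(model, model_path, kMSModelTypeMindIR, context);
  if (ret != kMSStatusSuccess) {
    MSModelDestroy(&model);
    return -1;  // build failed; details are printed to the log
  }
  // The built model holds the context (per the C API notes), so only the
  // model handle is destroyed here.
  MSModelDestroy(&model);
  return 0;
}
```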
b/docs/lite/api/source_zh_cn/api_c/status_c.md index 7e85a39309..c0431835de 100644 --- a/docs/lite/api/source_zh_cn/api_c/status_c.md +++ b/docs/lite/api/source_zh_cn/api_c/status_c.md @@ -1,6 +1,6 @@ # status_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_c/status_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_c/status_c.md) ```C #include diff --git a/docs/lite/api/source_zh_cn/api_c/tensor_c.md b/docs/lite/api/source_zh_cn/api_c/tensor_c.md index 781fb1064a..4537b93d99 100644 --- a/docs/lite/api/source_zh_cn/api_c/tensor_c.md +++ b/docs/lite/api/source_zh_cn/api_c/tensor_c.md @@ -1,6 +1,6 @@ # tensor_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_c/tensor_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_c/tensor_c.md) ```C #include @@ -123,7 +123,7 @@ void MSTensorSetDataType(MSTensorHandle tensor, MSDataType type) MSDataType MSTensorGetDataType(const MSTensorHandle tensor) ``` -获取MSTensor的数据类型,具体数据类型见[MSDataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_c/data_type_c.html#msdatatype)。 +获取MSTensor的数据类型,具体数据类型见[MSDataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_c/data_type_c.html#msdatatype)。 - 参数 - `tensor`: 指向MSTensor的指针。 @@ -171,7 +171,7 @@ void MSTensorSetFormat(MSTensorHandle tensor, MSFormat format) - 参数 - `tensor`: 指向MSTensor的指针。 - - `format`: 张量的数据排列,具体见[MSFormat](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_c/format_c.html#msformat)。 + - `format`: 张量的数据排列,具体见[MSFormat](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_c/format_c.html#msformat)。 ### MSTensorGetFormat @@ -183,7 +183,7 @@ MSFormat MSTensorGetFormat(const MSTensorHandle tensor) - 返回值 - 张量的数据排列,具体见[MSFormat](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_c/format_c.html#msformat)。 + 张量的数据排列,具体见[MSFormat](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_c/format_c.html#msformat)。 ### MSTensorSetData diff --git a/docs/lite/api/source_zh_cn/api_c/types_c.md b/docs/lite/api/source_zh_cn/api_c/types_c.md index 3fa48aaffc..9e0c5c4cdc 100644 --- a/docs/lite/api/source_zh_cn/api_c/types_c.md +++ b/docs/lite/api/source_zh_cn/api_c/types_c.md @@ -1,6 +1,6 @@ # types_c -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_c/types_c.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_c/types_c.md) ```C #include diff --git a/docs/lite/api/source_zh_cn/api_cpp/lite_cpp_example.rst b/docs/lite/api/source_zh_cn/api_cpp/lite_cpp_example.rst index edaaf3e8b2..561762ee28 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/lite_cpp_example.rst +++ b/docs/lite/api/source_zh_cn/api_cpp/lite_cpp_example.rst @@ -4,6 +4,6 @@ .. 
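The tensor_c.md hunks above pair the data-type and format getters with their setters. A short round-trip sketch in the same C-API style; MSTensorCreate's parameter list is recalled from the C API reference, not shown in this hunk, and should be checked against tensor_c.h:

```cpp
#include <cstdio>
#include "include/c_api/tensor_c.h"  // assumed header path

void TensorTypeAndFormat() {
  float data[6] = {0};
  int64_t shape[3] = {1, 2, 3};
  // Recalled signature: name, dtype, shape, shape_num, data, data_len.
  MSTensorHandle t = MSTensorCreate("demo", kMSDataTypeNumberTypeFloat32,
                                    shape, 3, data, sizeof(data));

  MSTensorSetFormat(t, kMSFormatNHWC);
  if (MSTensorGetDataType(t) == kMSDataTypeNumberTypeFloat32 &&
      MSTensorGetFormat(t) == kMSFormatNHWC) {
    std::printf("dtype/format round-trip OK\n");
  }
  MSTensorDestroy(&t);
}
```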
toctree:: :maxdepth: 1 - 极简Demo↗ - 基于JNI接口的Android应用开发↗ - 高阶用法↗ \ No newline at end of file + 极简Demo↗ + 基于JNI接口的Android应用开发↗ + 高阶用法↗ \ No newline at end of file diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore.md index ac1029ed08..d78f19bf76 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore.md @@ -1,6 +1,6 @@ # mindspore -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_cpp/mindspore.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_cpp/mindspore.md) ## 接口汇总 @@ -38,8 +38,8 @@ |--------------------------------------------------|---------------------------------------------------|--------|--------| | [MSTensor](#mstensor) | MindSpore中的张量。 | √ | √ | | [QuantParam](#quantparam) | MSTensor中的一组量化参数。 | √ | √ | -| [DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_datatype.html) | MindSpore MSTensor保存的数据支持的类型。 | √ | √ | -| [Format](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_format.html) | MindSpore MSTensor保存的数据支持的排列格式。 | √ | √ | +| [DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_datatype.html) | MindSpore MSTensor保存的数据支持的类型。 | √ | √ | +| [Format](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_format.html) | MindSpore MSTensor保存的数据支持的排列格式。 | √ | √ | | [Allocator](#allocator-1) | 内存管理基类。 | √ | √ | ### 模型分组 @@ -157,9 +157,9 @@ Context的数据。 | [bool GetEnableParallel() const](#getenableparallel) | ✕ | √ | | [void SetBuiltInDelegate(DelegateMode mode)](#setbuiltindelegate) | ✕ | √ | | [DelegateMode GetBuiltInDelegate() const](#getbuiltindelegate) | ✕ | √ | -| [void set_delegate(const std::shared_ptr\ &delegate)](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#set-delegate) | ✕ | √ | +| [void set_delegate(const std::shared_ptr\ &delegate)](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#set-delegate) | ✕ | √ | | [void SetDelegate(const std::shared_ptr\ &delegate)](#setdelegate) | ✕ | √ | -| [std::shared_ptr\ get_delegate() const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#get-delegate) | ✕ | √ | +| [std::shared_ptr\ get_delegate() const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#get-delegate) | ✕ | √ | | [std::shared_ptr\ GetDelegate() const](#getdelegate) | ✕ | √ | | [void SetMultiModalHW(bool float_mode)](#setmultimodalhw) | ✕ | √ | | [bool GetMultiModalHW() const](#getmultimodalhw) | ✕ | √ | @@ -2244,7 +2244,7 @@ Status Finalize() ## ModelExecutor -\#include <[multi_model_runner.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/api/multi_model_runner.h)> +\#include <[multi_model_runner.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/api/multi_model_runner.h)> ModelExecutor定义了对Model的封装,用于调度多个Model的推理。 @@ -2326,7 +2326,7 @@ std::vector GetOutputs() const ## MultiModelRunner -\#include <[multi_model_runner.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/api/multi_model_runner.h)> +\#include 
<[multi_model_runner.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/api/multi_model_runner.h)> MultiModelRunner用于创建包含多个Model的mindir,并提供调度多个模型的方式。 @@ -2612,10 +2612,10 @@ void DestroyTensorPtr(MSTensor *tensor) noexcept | [bool IsConst() const](#isconst) | √ | √ | | [bool IsDevice() const](#isdevice) | √ | ✕ | | [MSTensor *Clone() const](#clone) | √ | √ | -| [bool operator==(std::nullptr_t) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#operatorstd-nullptr-t) | √ | √ | -| [bool operator!=(std::nullptr_t) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#operatorstd-nullptr-t-1) | √ | √ | -| [bool operator!=(const MSTensor &tensor) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#operatorconst-mstensor-tensor) | √ | √ | -| [bool operator==(const MSTensor &tensor) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#operatorconst-mstensor-tensor-1) | √ | √ | +| [bool operator==(std::nullptr_t) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#operatorstd-nullptr-t) | √ | √ | +| [bool operator!=(std::nullptr_t) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#operatorstd-nullptr-t-1) | √ | √ | +| [bool operator!=(const MSTensor &tensor) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#operatorconst-mstensor-tensor) | √ | √ | +| [bool operator==(const MSTensor &tensor) const](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#operatorconst-mstensor-tensor-1) | √ | √ | | [void SetShape(const std::vector\ &shape)](#setshape) | √ | √ | | [void SetDataType(enum DataType data_type)](#setdatatype) | √ | √ | | [void SetTensorName(const std::string &name)](#settensorname) | √ | √ | @@ -3176,7 +3176,7 @@ typedef enum { \#include <[delegate.h](https://gitee.com/mindspore/mindspore/blob/v2.7.1/include/api/delegate.h)> -定义了MindSpore Lite [Kernel](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_kernel.html#mindspore-kernel)列表的迭代器。 +定义了MindSpore Lite [Kernel](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_kernel.html#mindspore-kernel)列表的迭代器。 ```cpp using KernelIter = std::vector::iterator @@ -3210,7 +3210,7 @@ DelegateModel(std::vector *kernels, const std::vector *kernels_ ``` -[**Kernel**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_kernel.html#kernel)的列表,保存模型的所有算子。 +[**Kernel**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_kernel.html#kernel)的列表,保存模型的所有算子。 #### inputs_ @@ -3218,7 +3218,7 @@ std::vector *kernels_ const std::vector &inputs_ ``` -[**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)的列表,保存这个算子的输入tensor。 +[**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)的列表,保存这个算子的输入tensor。 #### outputs_ @@ -3226,7 +3226,7 @@ const std::vector &inputs_ const std::vector &outputs ``` -[**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)的列表,保存这个算子的输出tensor。 +[**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)的列表,保存这个算子的输出tensor。 #### primitives_ @@ -3234,7 +3234,7 @@ const std::vector &outputs const std::map &primitives_ ``` -[**Kernel**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_kernel.html#kernel)和**schema::Primitive**的Map,保存所有算子的属性。 
+[**Kernel**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_kernel.html#kernel)和**schema::Primitive**的Map,保存所有算子的属性。 #### version_ @@ -3326,7 +3326,7 @@ const std::vector &inputs() - 返回值 - [**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)的列表。 + [**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)的列表。 #### outputs @@ -3338,7 +3338,7 @@ const std::vector &outputs() - 返回值 - [**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)的列表。 + [**MSTensor**](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)的列表。 #### GetVersion @@ -4221,11 +4221,11 @@ inline Status(const StatusCode code, int line_of_code, const char *file_name, co | [inline std::string GetErrDescription() const](#geterrdescription) | √ | √ | | [inline std::string SetErrDescription(const std::string &err_description)](#seterrdescription) | √ | √ | | [inline void SetStatusMsg(const std::string &status_msg)](#setstatusmsg) | √ | √ | -| [friend std::ostream &operator\<\<(std::ostream &os, const Status &s)](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#operator< +\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/converter_context.h)> **enum**类型变量,定义MindSpore Lite转换支持的框架类型。 @@ -32,7 +32,7 @@ ## ConverterParameters -\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/converter_context.h)> +\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/converter_context.h)> **struct**类型结构体,定义模型解析时的转换参数,用于模型解析时的只读参数。 @@ -47,7 +47,7 @@ struct ConverterParameters { ## ConverterContext -\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/converter_context.h)> +\#include <[converter_context.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/converter_context.h)> 模型转换过程中,基本信息的设置与获取。 @@ -113,7 +113,7 @@ static std::map GetConfigInfo(const std::string §i ## NodeParser -\#include <[node_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/node_parser.h)> +\#include <[node_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/node_parser.h)> op节点的解析基类。 @@ -216,7 +216,7 @@ tflite节点解析接口函数。 ## NodeParserPtr -\#include <[node_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/node_parser.h)> +\#include <[node_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/node_parser.h)> NodeParser类的共享智能指针类型。 @@ -226,7 +226,7 @@ using NodeParserPtr = std::shared_ptr; ## ModelParser -\#include <[model_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/model_parser.h)> +\#include <[model_parser.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/model_parser.h)> 解析原始模型的基类。 @@ -258,7 +258,7 @@ api::FuncGraphPtr Parse(const converter::ConverterParameters &flags); - 参数 - - `flags`: 解析模型时基本信息,具体见[ConverterParameters](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#converterparameters)。 + - `flags`: 
解析模型时基本信息,具体见[ConverterParameters](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#converterparameters)。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md index 398d3d98d0..bbb5e52529 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md @@ -1,6 +1,6 @@ # DataType -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_cpp/mindspore_datatype.md) 以下表格描述了MindSpore MSTensor保存的数据支持的类型。 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md index 7212e18abd..d635a3e52a 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md @@ -1,6 +1,6 @@ # mindspore::Format -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_cpp/mindspore_format.md) 以下表格描述了MindSpore MSTensor保存的数据支持的排列格式。 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md index 94c57c6619..3e4e4aa0a4 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md @@ -1,6 +1,6 @@ # mindspore::kernel -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_cpp/mindspore_kernel.md) ## 接口汇总 @@ -32,13 +32,13 @@ Kernel的默认与带参构造函数,构造Kernel实例。 - 参数 - - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)。 + - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)。 - - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)。 + - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)。 - `primitive`: 算子经由flatbuffers反序化为Primitive后的结果。 - - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#context)。 + - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#context)。 ### 析构函数 @@ -59,7 +59,7 @@ virtual int InferShape() ``` 在用户调用`Model::Build`接口时,或是模型推理中需要推理算子形状时,会调用到该接口。 
-在自定义算子场景中,用户可以覆写该接口,实现自定义算子的形状推理逻辑。详见[自定义算子章节](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.1/advanced/third_party/register_kernel.html)。 +在自定义算子场景中,用户可以覆写该接口,实现自定义算子的形状推理逻辑。详见[自定义算子章节](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.2/advanced/third_party/register_kernel.html)。 在`InferShape`函数中,一般需要实现算子的形状、数据类型和数据排布的推理逻辑。 - 返回值 @@ -84,7 +84,7 @@ virtual schema::QuantType quant_type() ## KernelInterface -\#include <[kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/kernel_interface.h)> +\#include <[kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/kernel_interface.h)> 算子扩展能力基类。 @@ -117,9 +117,9 @@ virtual Status Infer(std::vector *inputs, std::vector *inputs, std::vector &in_tensors) { th - 参数 - - `in_tensors`: 算子的所有输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)列表。 + - `in_tensors`: 算子的所有输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)列表。 #### set_input @@ -251,7 +251,7 @@ virtual void set_input(mindspore::MSTensor in_tensor, int index) { this->inputs_ - 参数 - - `in_tensor`: 算子的输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)。 + - `in_tensor`: 算子的输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)。 - `index`: 算子输入在所有输入中的下标,从0开始计数。 @@ -265,7 +265,7 @@ virtual void set_outputs(const std::vector &out_tensors) { - 参数 - - `out_tensors`: 算子的所有输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)列表。 + - `out_tensors`: 算子的所有输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)列表。 #### set_output @@ -277,7 +277,7 @@ virtual void set_output(mindspore::MSTensor out_tensor, int index) { this->outpu - 参数 - - `out_tensor`: 算子的输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)。 + - `out_tensor`: 算子的输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)。 - `index`: 算子输出在所有输出中的下标,从0开始计数。 @@ -287,7 +287,7 @@ virtual void set_output(mindspore::MSTensor out_tensor, int index) { this->outpu virtual const std::vector &inputs() ``` -返回算子的所有输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)列表。 +返回算子的所有输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)列表。 - 返回值 @@ -299,7 +299,7 @@ virtual const std::vector &inputs() virtual const std::vector &outputs() ``` -返回算子的所有输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)列表。 +返回算子的所有输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)列表。 - 返回值 @@ -335,7 +335,7 @@ void set_name(const std::string &name) const lite::Context *context() const ``` -返回算子对应的[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#context)。 +返回算子对应的[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#context)。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md index e0a710e6db..124f1fe9cf 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md @@ -1,32 +1,32 @@ # mindspore::registry 
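The mindspore_kernel.md hunks above spell out the `Kernel` base class: the `(inputs, outputs, primitive, ctx)` constructor, the overridable `InferShape`, and the `set_input`/`set_output` accessors. For orientation while reviewing these link changes, here is a minimal custom-operator sketch against that API; the header path, the `Prepare`/`Execute`/`ReSize` trio, and `0` as the success code are assumptions, not something this diff states.

```cpp
#include "include/api/kernel.h"  // header path assumed

// Minimal custom operator: InferShape copies shape and dtype from input 0 to
// output 0, the pattern the InferShape notes above describe for custom ops.
class MyCustomKernel : public mindspore::kernel::Kernel {
 public:
  using mindspore::kernel::Kernel::Kernel;  // inherit the (inputs, outputs, primitive, ctx) ctor

  int Prepare() override { return 0; }  // one-time setup; 0 assumed to mean success
  int Execute() override { return 0; }  // the actual computation would go here
  int ReSize() override { return 0; }   // redo shape-dependent setup after a resize

  int InferShape() override {
    if (inputs().empty() || outputs().empty()) {
      return -1;  // assumed error code
    }
    mindspore::MSTensor out = outputs()[0];   // take a handle, then write it back
    out.SetShape(inputs()[0].Shape());        // shape inference
    out.SetDataType(inputs()[0].DataType());  // dtype inference
    set_output(out, 0);                       // setter documented in the hunks above
    return 0;
  }
};
```

The sketch goes through `set_output` rather than the protected `outputs_` member, since only the accessor signatures are quoted in this diff.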
-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md)
+[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry.md)
 
 ## 接口汇总
 
 | 类名 | 描述 |
 | --- | --- |
 | [NodeParserRegistry](#nodeparserregistry) | 扩展Node解析的注册类。|
-| [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_registry.html#reg-node-parser) | 注册扩展Node解析。|
 | [ModelParserRegistry](#modelparserregistry) | 扩展Model解析的注册类。|
-| [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_registry.html#reg-model-parser) | 注册扩展Model解析。|
 | [PassBase](#passbase) | Pass的基类。|
 | [PassPosition](#passposition) | 扩展Pass的运行位置。|
 | [PassRegistry](#passregistry) | 扩展Pass注册构造类。|
-| [REG_PASS](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_registry.html#reg-pass) | 注册扩展Pass。|
-| [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_registry.html#reg-scheduled-pass) | 注册扩展Pass的调度顺序。|
 | [RegisterKernel](#registerkernel) | 算子注册实现类。|
 | [KernelReg](#kernelreg) | 算子注册构造类。|
-| [REGISTER_KERNEL](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_registry.html#register-kernel) | 注册算子。|
-| [REGISTER_CUSTOM_KERNEL](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_registry.html#register-custom-kernel) | Custom算子注册。|
 | [RegisterKernelInterface](#registerkernelinterface) | 算子扩展能力注册实现类。|
 | [KernelInterfaceReg](#kernelinterfacereg) | 算子扩展能力注册构造类。|
-| [REGISTER_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_registry.html#register-kernel-interface) | 注册算子扩展能力。|
-| [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_registry.html#register-custom-kernel-interface) | 注册Custom算子扩展能力。|
+| [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_registry.html#reg-node-parser) | 注册扩展Node解析。|
 | [ModelParserRegistry](#modelparserregistry) | 扩展Model解析的注册类。|
+| [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_registry.html#reg-model-parser) | 注册扩展Model解析。|
 | [PassBase](#passbase) | Pass的基类。|
 | [PassPosition](#passposition) | 扩展Pass的运行位置。|
 | [PassRegistry](#passregistry) | 扩展Pass注册构造类。|
+| [REG_PASS](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_registry.html#reg-pass) | 注册扩展Pass。|
+| [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_registry.html#reg-scheduled-pass) | 注册扩展Pass的调度顺序。|
 | [RegisterKernel](#registerkernel) | 算子注册实现类。|
 | [KernelReg](#kernelreg) | 算子注册构造类。|
+| [REGISTER_KERNEL](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_registry.html#register-kernel) | 注册算子。|
+| [REGISTER_CUSTOM_KERNEL](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_registry.html#register-custom-kernel) | Custom算子注册。|
 | [RegisterKernelInterface](#registerkernelinterface) | 算子扩展能力注册实现类。|
 | [KernelInterfaceReg](#kernelinterfacereg) | 算子扩展能力注册构造类。|
+| [REGISTER_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_registry.html#register-kernel-interface) | 注册算子扩展能力。|
+| [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_registry.html#register-custom-kernel-interface) | 注册Custom算子扩展能力。|
 
 ## NodeParserRegistry
 
-\#include <[node_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/node_parser_registry.h)>
+\#include <[node_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/node_parser_registry.h)>
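The summary table above lists the registration entry points this file documents. As a sketch of the node-parser path (assuming `kFmkTypeOnnx` as the `FmkType` enumerator and non-pure default `Parse` overloads in `NodeParser`, neither of which this diff quotes):

```cpp
#include <memory>

#include "include/registry/node_parser_registry.h"

// Hypothetical parser for ONNX "Add" nodes; a real one would override the
// onnx-specific Parse(...) overload declared in node_parser.h.
class AddNodeParser : public mindspore::converter::NodeParser {
  // ... Parse(...) override elided ...
};

// Static registration through the NodeParserRegistry constructor documented
// below; the REG_NODE_PARSER macro takes the same three arguments.
static mindspore::registry::NodeParserRegistry g_onnx_add_parser_reg(
    mindspore::converter::kFmkTypeOnnx, "Add", std::make_shared<AddNodeParser>());
```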
NodeParserRegistry类用于注册及获取NodeParser类型的共享智能指针。 @@ -41,11 +41,11 @@ NodeParserRegistry(converter::FmkType fmk_type, const std::string &node_type, - 参数 - - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#fmktype)说明。 - `node_type`: 节点的类型。 - - `node_parser`: NodeParser类型的共享智能指针实例,具体见[NodeParserPtr](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#nodeparserptr)说明。 + - `node_parser`: NodeParser类型的共享智能指针实例,具体见[NodeParserPtr](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#nodeparserptr)说明。 ### ~NodeParserRegistry @@ -67,13 +67,13 @@ static converter::NodeParserPtr GetNodeParser(converter::FmkType fmk_type, const - 参数 - - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#fmktype)说明。 - `node_type`: 节点的类型。 ## REG_NODE_PARSER -\#include <[node_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/node_parser_registry.h)> +\#include <[node_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/node_parser_registry.h)> ```c++ #define REG_NODE_PARSER(fmk_type, node_type, node_parser) @@ -83,25 +83,25 @@ static converter::NodeParserPtr GetNodeParser(converter::FmkType fmk_type, const - 参数 - - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk_type`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#fmktype)说明。 - `node_type`: 节点的类型。 - - `node_parser`: NodeParser类型的共享智能指针实例,具体见[NodeParserPtr](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#nodeparserptr)说明。 + - `node_parser`: NodeParser类型的共享智能指针实例,具体见[NodeParserPtr](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#nodeparserptr)说明。 ## ModelParserCreator -\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/model_parser_registry.h)> +\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/model_parser_registry.h)> ```c++ typedef converter::ModelParser *(*ModelParserCreator)() ``` -创建[ModelParser](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#modelparser)的函数原型声明。 +创建[ModelParser](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#modelparser)的函数原型声明。 ## ModelParserRegistry -\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/model_parser_registry.h)> +\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/model_parser_registry.h)> ModelParserRegistry类用于注册及获取ModelParserCreator类型的函数指针。 @@ -115,7 +115,7 @@ ModelParserRegistry(FmkType fmk, ModelParserCreator creator) - 参数 - - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk`: 
框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#fmktype)说明。 - `creator`: ModelParserCreator类型的函数指针,具体见[ModelParserCreator](#modelparsercreator)说明。 @@ -139,11 +139,11 @@ static ModelParser *GetModelParser(FmkType fmk) - 参数 - - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#fmktype)说明。 ## REG_MODEL_PARSER -\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/model_parser_registry.h)> +\#include <[model_parser_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/model_parser_registry.h)> ```c++ #define REG_MODEL_PARSER(fmk, parserCreator) @@ -153,15 +153,15 @@ static ModelParser *GetModelParser(FmkType fmk) - 参数 - - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#fmktype)说明。 + - `fmk`: 框架类型,具体见[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#fmktype)说明。 - `creator`: ModelParserCreator类型的函数指针,具体见[ModelParserCreator](#modelparsercreator)说明。 -> 用户自定义的ModelParser,框架类型必须满足设定支持的框架类型[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_converter.html#fmktype)。 +> 用户自定义的ModelParser,框架类型必须满足设定支持的框架类型[FmkType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_converter.html#fmktype)。 ## PassBase -\#include <[pass_base.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/pass_base.h)> +\#include <[pass_base.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/pass_base.h)> PassBase定义了图优化的基类,以供用户继承并自定义图优化算法。 @@ -201,7 +201,7 @@ virtual bool Execute(const api::FuncGraphPtr &func_graph) = 0; ## PassBasePtr -\#include <[pass_base.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/pass_base.h)> +\#include <[pass_base.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/pass_base.h)> PassBase类的共享智能指针类型。 @@ -211,7 +211,7 @@ using PassBasePtr = std::shared_ptr ## PassPosition -\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/pass_registry.h)> +\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/pass_registry.h)> **enum**类型变量,定义扩展Pass的运行位置。 @@ -224,7 +224,7 @@ enum PassPosition { ## PassRegistry -\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/pass_registry.h)> +\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/pass_registry.h)> PassRegistry类用于注册及获取Pass类实例。 @@ -290,7 +290,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name) ## REG_PASS -\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/pass_registry.h)> +\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/pass_registry.h)> ```c++ #define REG_PASS(name, pass) @@ -306,7 +306,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name) ## REG_SCHEDULED_PASS -\#include 
<[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/pass_registry.h)> +\#include <[pass_registry.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/pass_registry.h)> ```c++ #define REG_SCHEDULED_PASS(position, names) @@ -322,7 +322,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name) > MindSpore Lite开放了部分内置Pass,请见以下说明。用户可以在`names`参数中添加内置Pass的命名标识,以在指定运行处调用内置Pass。 > -> - `ConstFoldPass`: 将输入均是常量的节点进行离线计算,导出的模型将不含该节点。特别地,针对shape算子,在[inputShape](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.1/converter/converter_tool.html#参数说明)给定的情形下,也会触发预计算。 +> - `ConstFoldPass`: 将输入均是常量的节点进行离线计算,导出的模型将不含该节点。特别地,针对shape算子,在[inputShape](https://www.mindspore.cn/lite/docs/zh-CN/r2.7.2/converter/converter_tool.html#参数说明)给定的情形下,也会触发预计算。 > - `DumpGraph`: 导出当前状态下的模型。请确保当前模型为NHWC或者NCHW格式的模型,例如卷积算子等。 > - `ToNCHWFormat`: 将当前状态下的模型转换为NCHW的格式,例如,四维的图输入、卷积算子等。 > - `ToNHWCFormat`: 将当前状态下的模型转换为NHWC的格式,例如,四维的图输入、卷积算子等。 @@ -334,7 +334,7 @@ static PassBasePtr GetPassFromStoreRoom(const std::string &pass_name) ## KernelDesc -\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/register_kernel.h)> +\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/register_kernel.h)> **struct**类型结构体,定义扩展kernel的基本属性。 @@ -349,7 +349,7 @@ struct KernelDesc { ## RegisterKernel -\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/register_kernel.h)> +\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/register_kernel.h)> ### CreateKernel @@ -363,13 +363,13 @@ using CreateKernel = std::function( - 参数 - - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)。 + - `inputs`: 算子输入[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)。 - - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#mstensor)。 + - `outputs`: 算子输出[MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#mstensor)。 - `primitive`: 算子经由flatbuffers反序化为Primitive后的结果。 - - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#context)。 + - `ctx`: 算子的上下文[Context](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#context)。 ### 公有成员函数 @@ -387,9 +387,9 @@ static Status RegKernel(const std::string &arch, const std::string &provider, Da - `provider`: 生产商名,由用户自定义。 - - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_datatype.html)。 + - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_datatype.html)。 - - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 + - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 - `creator`: 创建算子的函数指针,具体见[CreateKernel](#createkernel)的说明。 @@ -407,7 +407,7 @@ Custom算子注册。 - `provider`: 生产商名,由用户自定义。 - - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_datatype.html)。 + - `data_type`: 
算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_datatype.html)。 - `type`: 算子类型,由用户自定义,确保唯一即可。 @@ -429,7 +429,7 @@ static CreateKernel GetCreator(const schema::Primitive *primitive, KernelDesc *d ## KernelReg -\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/register_kernel.h)> +\#include <[registry/register_kernel.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/register_kernel.h)> ### ~KernelReg @@ -453,9 +453,9 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty - `provider`: 生产商名,由用户自定义。 - - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_datatype.html)。 + - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_datatype.html)。 - - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 + - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 - `creator`: 创建算子的函数指针,具体见[CreateKernel](#createkernel)的说明。 @@ -471,7 +471,7 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty - `provider`: 生产商名,由用户自定义。 - - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_datatype.html)。 + - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_datatype.html)。 - `op_type`: 算子类型,由用户自定义,确保唯一即可。 @@ -491,9 +491,9 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty - `provider`: 生产商名,由用户自定义。 - - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_datatype.html)。 + - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_datatype.html)。 - - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 + - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 - `creator`: 创建算子的函数指针,具体见[CreateKernel](#createkernel)的说明。 @@ -511,7 +511,7 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty - `provider`: 生产商名,由用户自定义。 - - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_datatype.html)。 + - `data_type`: 算子支持的数据类型,具体见[DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_datatype.html)。 - `op_type`: 算子类型,由用户自定义,确保唯一即可。 @@ -519,7 +519,7 @@ KernelReg(const std::string &arch, const std::string &provider, DataType data_ty ## KernelInterfaceCreator -\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/register_kernel_interface.h)> +\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/register_kernel_interface.h)> 定义创建算子的函数指针类型。 @@ -529,7 +529,7 @@ using KernelInterfaceCreator = std::function +\#include 
<[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/register_kernel_interface.h)> 算子扩展能力注册实现类。 @@ -563,7 +563,7 @@ static Status Reg(const std::string &provider, int op_type, const KernelInterfac - `provider`: 生产商,由用户自定义。 - - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 + - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 - `creator`: KernelInterface的创建函数,详细见[KernelInterfaceCreator](#kernelinterfacecreator)的说明。 @@ -585,7 +585,7 @@ static std::shared_ptr GetKernelInterface(const std::st ## KernelInterfaceReg -\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/register_kernel_interface.h)> +\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/register_kernel_interface.h)> 算子扩展能力注册构造类。 @@ -601,7 +601,7 @@ KernelInterfaceReg(const std::string &provider, int op_type, const KernelInterfa - `provider`: 生产商,由用户自定义。 - - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 + - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 - `creator`: KernelInterface的创建函数,详细见[KernelInterfaceCreator](#kernelinterfacecreator)的说明。 @@ -621,7 +621,7 @@ KernelInterfaceReg(const std::string &provider, const std::string &op_type, cons ## REGISTER_KERNEL_INTERFACE -\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/register_kernel_interface.h)> +\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/register_kernel_interface.h)> 注册KernelInterface的实现。 @@ -633,13 +633,13 @@ KernelInterfaceReg(const std::string &provider, const std::string &op_type, cons - `provider`: 生产商,由用户自定义。 - - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 + - `op_type`: 算子类型,定义在[ops.fbs](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/schema/ops.fbs)中,编绎时会生成到ops_generated.h,该文件可以在发布件中获取。 - `creator`: 创建KernelInterface的函数指针,具体见[KernelInterfaceCreator](#kernelinterfacecreator)的说明。 ## REGISTER_CUSTOM_KERNEL_INTERFACE -\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/register_kernel_interface.h)> +\#include <[registry/register_kernel_interface.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/register_kernel_interface.h)> 注册Custom算子对应的KernelInterface实现。 diff --git a/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md b/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md index 3ee73463bc..fe3360556b 100644 --- a/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md +++ b/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md @@ -1,6 +1,6 @@ # mindspore::registry::opencl 
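The register_kernel.h hunks above give `CreateKernel`'s parameter list and the kernel-registration macros. Below is a sketch wiring the `MyCustomKernel` class from the earlier sketch into `REGISTER_CUSTOM_KERNEL`; the bare-token macro arguments (assuming the macro stringizes `arch`/`provider`/`op_type`) and the `CPU` arch value are assumptions.

```cpp
#include <memory>
#include <vector>

#include "include/registry/register_kernel.h"

// MyCustomKernel is the kernel::Kernel subclass sketched after the
// mindspore_kernel.md hunks; assume its definition is visible here.

// Matches the CreateKernel std::function signature documented above.
std::shared_ptr<mindspore::kernel::Kernel> CreateMyCustomKernel(
    const std::vector<mindspore::MSTensor> &inputs,
    const std::vector<mindspore::MSTensor> &outputs,
    const mindspore::schema::Primitive *primitive, const mindspore::Context *ctx) {
  return std::make_shared<MyCustomKernel>(inputs, outputs, primitive, ctx);
}

// Static registration: CPU backend, provider "vendor", float32 data type,
// user-defined op type "MyCustom".
REGISTER_CUSTOM_KERNEL(CPU, vendor, mindspore::DataType::kNumberTypeFloat32, MyCustom,
                       CreateMyCustomKernel)
```

A matching `KernelInterface` would normally be registered through `REGISTER_CUSTOM_KERNEL_INTERFACE` (documented above) so the converter and runtime can infer the custom op's output shapes.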
-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_cpp/mindspore_registry_opencl.md) ## 接口汇总 @@ -10,7 +10,7 @@ ## OpenCLRuntimeWrapper -\#include <[include/registry/opencl_runtime_wrapper.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/include/registry/opencl_runtime_wrapper.h)> +\#include <[include/registry/opencl_runtime_wrapper.h](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/include/registry/opencl_runtime_wrapper.h)> OpenCLRuntimeWrapper类包装了内部OpenCL的相关接口,用于支持南向GPU算子的开发。 @@ -134,7 +134,7 @@ Status SyncCommandQueue() std::shared_ptr GetAllocator() ``` -获取GPU内存分配器的智能指针。通过[Allocator接口](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html),可申请GPU内存,用于OpenCL内核的运算。 +获取GPU内存分配器的智能指针。通过[Allocator接口](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html),可申请GPU内存,用于OpenCL内核的运算。 #### MapBuffer diff --git a/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md b/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md index 9bfd68ace9..97fb5f6e72 100644 --- a/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md +++ b/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md @@ -1,6 +1,6 @@ # AscendDeviceInfo -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/ascend_device_info.md) ```java import com.mindspore.config.AscendDeviceInfo; diff --git a/docs/lite/api/source_zh_cn/api_java/class_list.md b/docs/lite/api/source_zh_cn/api_java/class_list.md index c490cf5ef0..1580533cb5 100644 --- a/docs/lite/api/source_zh_cn/api_java/class_list.md +++ b/docs/lite/api/source_zh_cn/api_java/class_list.md @@ -1,20 +1,20 @@ # 类列表 -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/class_list.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/class_list.md) | 包 | 类 | 描述 | 云侧推理是否支持 | 端侧推理是否支持 | | ------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |--------|--------| -| com.mindspore | [Model](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/model.html) | Model定义了MindSpore中的模型,用于计算图的编译和执行。 | √ | √ | -| com.mindspore.config | [MSContext](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/mscontext.html) | MSContext用于保存执行期间的上下文。 | √ | √ | -| com.mindspore | [MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/mstensor.html) | MSTensor定义了MindSpore中的张量。 | √ | √ | -| com.mindspore | 
[ModelParallelRunner](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/model_parallel_runner.html) | 定义了MindSpore Lite并发推理。 | √ | ✕ | -| com.mindspore.config | [RunnerConfig](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/runner_config.html) | RunnerConfig定义并发推理的配置参数。 | √ | ✕ | -| com.mindspore | [Graph](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/graph.html) | Graph定义了MindSpore中的计算图。 | ✕ | √ | -| com.mindspore.config | [CpuBindMode](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/mscontext.html#cpubindmode) | CpuBindMode定义了CPU绑定模式。 | √ | √ | -| com.mindspore.config | [DeviceType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/mscontext.html#devicetype) | DeviceType定义了后端设备类型。 | √ | √ | -| com.mindspore.config | [DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/mstensor.html#datatype) | DataType定义了所支持的数据类型。 | √ | √ | -| com.mindspore.config | [Version](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/version.html) | Version用于获取MindSpore的版本信息。 | √ | √ | -| com.mindspore.config | [ModelType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/model.html#modeltype) | ModelType定义了模型文件的类型。 | √ | √ | -| com.mindspore.config | [AscendDeviceInfo](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/ascend_device_info.html) | MindSpore Lite用于昇腾硬件推理的配置参数。 | √ | ✕ | -| com.mindspore.config | [TrainCfg](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_java/train_cfg.html) | 用于端上模型训练的配置参数。 | ✕ | √ | +| com.mindspore | [Model](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/model.html) | Model定义了MindSpore中的模型,用于计算图的编译和执行。 | √ | √ | +| com.mindspore.config | [MSContext](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/mscontext.html) | MSContext用于保存执行期间的上下文。 | √ | √ | +| com.mindspore | [MSTensor](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/mstensor.html) | MSTensor定义了MindSpore中的张量。 | √ | √ | +| com.mindspore | [ModelParallelRunner](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/model_parallel_runner.html) | 定义了MindSpore Lite并发推理。 | √ | ✕ | +| com.mindspore.config | [RunnerConfig](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/runner_config.html) | RunnerConfig定义并发推理的配置参数。 | √ | ✕ | +| com.mindspore | [Graph](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/graph.html) | Graph定义了MindSpore中的计算图。 | ✕ | √ | +| com.mindspore.config | [CpuBindMode](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/mscontext.html#cpubindmode) | CpuBindMode定义了CPU绑定模式。 | √ | √ | +| com.mindspore.config | [DeviceType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/mscontext.html#devicetype) | DeviceType定义了后端设备类型。 | √ | √ | +| com.mindspore.config | [DataType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/mstensor.html#datatype) | DataType定义了所支持的数据类型。 | √ | √ | +| com.mindspore.config | [Version](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/version.html) | Version用于获取MindSpore的版本信息。 | √ | √ | +| com.mindspore.config | [ModelType](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/model.html#modeltype) | ModelType定义了模型文件的类型。 | √ | √ | +| com.mindspore.config | [AscendDeviceInfo](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/ascend_device_info.html) | MindSpore Lite用于昇腾硬件推理的配置参数。 | √ | ✕ | +| com.mindspore.config | [TrainCfg](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_java/train_cfg.html) | 用于端上模型训练的配置参数。 | ✕ | √ | diff --git a/docs/lite/api/source_zh_cn/api_java/graph.md b/docs/lite/api/source_zh_cn/api_java/graph.md index 8671d37fb7..b7929eaa5f 
100644 --- a/docs/lite/api/source_zh_cn/api_java/graph.md +++ b/docs/lite/api/source_zh_cn/api_java/graph.md @@ -1,6 +1,6 @@ # Graph -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/graph.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/graph.md) ```java import com.mindspore.Graph; diff --git a/docs/lite/api/source_zh_cn/api_java/lite_java_example.rst b/docs/lite/api/source_zh_cn/api_java/lite_java_example.rst index fe574a73fa..5f37deb67a 100644 --- a/docs/lite/api/source_zh_cn/api_java/lite_java_example.rst +++ b/docs/lite/api/source_zh_cn/api_java/lite_java_example.rst @@ -4,6 +4,6 @@ .. toctree:: :maxdepth: 1 - 极简Demo↗ - 基于Java接口的Android应用开发↗ - 高阶用法↗ \ No newline at end of file + 极简Demo↗ + 基于Java接口的Android应用开发↗ + 高阶用法↗ \ No newline at end of file diff --git a/docs/lite/api/source_zh_cn/api_java/model.md b/docs/lite/api/source_zh_cn/api_java/model.md index dc52ccacda..3e55fa495e 100644 --- a/docs/lite/api/source_zh_cn/api_java/model.md +++ b/docs/lite/api/source_zh_cn/api_java/model.md @@ -1,6 +1,6 @@ # Model -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/model.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/model.md) ```java import com.mindspore.Model; diff --git a/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md b/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md index 55ae5b5930..1062f01ac0 100644 --- a/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md +++ b/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md @@ -1,6 +1,6 @@ # ModelParallelRunner -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/model_parallel_runner.md) ```java import com.mindspore.config.RunnerConfig; diff --git a/docs/lite/api/source_zh_cn/api_java/mscontext.md b/docs/lite/api/source_zh_cn/api_java/mscontext.md index cae3f0d3c0..405a29cd35 100644 --- a/docs/lite/api/source_zh_cn/api_java/mscontext.md +++ b/docs/lite/api/source_zh_cn/api_java/mscontext.md @@ -1,6 +1,6 @@ # MSContext -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/mscontext.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/mscontext.md) ```java import com.mindspore.config.MSContext; @@ -54,7 +54,7 @@ public boolean init(int threadNum, int cpuBindMode) - 参数 - `threadNum`: 
线程数。 - - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.config.CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)中定义。 + - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.config.CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)中定义。 - 返回值 @@ -69,7 +69,7 @@ public boolean init(int threadNum, int cpuBindMode, boolean isEnableParallel) - 参数 - `threadNum`: 线程数。 - - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.config.CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)中定义。 + - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.config.CpuBindMode](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/CpuBindMode.java)中定义。 - `isEnableParallel`: 是否开启异构并行。 - 返回值 @@ -86,7 +86,7 @@ public boolean addDeviceInfo(int deviceType, boolean isEnableFloat16) - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.config.DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)中定义。 + - `deviceType`: 设备类型,`deviceType`在[com.mindspore.config.DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)中定义。 - `isEnableFloat16`: 是否开启fp16。 - 返回值 @@ -101,7 +101,7 @@ public boolean addDeviceInfo(int deviceType, boolean isEnableFloat16, int npuFre - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.config.DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)中定义。 + - `deviceType`: 设备类型,`deviceType`在[com.mindspore.config.DeviceType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/DeviceType.java)中定义。 - `isEnableFloat16`: 是否开启fp16。 - `npuFreq`: NPU运行频率,仅当deviceType为npu才需要。 diff --git a/docs/lite/api/source_zh_cn/api_java/mstensor.md b/docs/lite/api/source_zh_cn/api_java/mstensor.md index 959c3e1ef3..d07d9826b6 100644 --- a/docs/lite/api/source_zh_cn/api_java/mstensor.md +++ b/docs/lite/api/source_zh_cn/api_java/mstensor.md @@ -1,6 +1,6 @@ # MSTensor -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/mstensor.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/mstensor.md) ```java import com.mindspore.MSTensor; @@ -86,7 +86,7 @@ public int[] getShape() public int getDataType() ``` -DataType在[com.mindspore.DataType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/java/src/main/java/com/mindspore/config/DataType.java)中定义。 +DataType在[com.mindspore.DataType](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/java/src/main/java/com/mindspore/config/DataType.java)中定义。 - 返回值 diff --git a/docs/lite/api/source_zh_cn/api_java/runner_config.md b/docs/lite/api/source_zh_cn/api_java/runner_config.md index 3033b83cba..f0c92b6cb3 100644 --- a/docs/lite/api/source_zh_cn/api_java/runner_config.md +++ 
b/docs/lite/api/source_zh_cn/api_java/runner_config.md @@ -1,6 +1,6 @@ # RunnerConfig -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/runner_config.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/runner_config.md) RunnerConfig定义了MindSpore Lite并发推理的配置参数。 diff --git a/docs/lite/api/source_zh_cn/api_java/train_cfg.md b/docs/lite/api/source_zh_cn/api_java/train_cfg.md index e9d1540c47..95407ba090 100644 --- a/docs/lite/api/source_zh_cn/api_java/train_cfg.md +++ b/docs/lite/api/source_zh_cn/api_java/train_cfg.md @@ -1,6 +1,6 @@ # TrainCfg -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/train_cfg.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/train_cfg.md) ```java import com.mindspore.config.TrainCfg; diff --git a/docs/lite/api/source_zh_cn/api_java/version.md b/docs/lite/api/source_zh_cn/api_java/version.md index 267d78a39b..4881d380f1 100644 --- a/docs/lite/api/source_zh_cn/api_java/version.md +++ b/docs/lite/api/source_zh_cn/api_java/version.md @@ -1,6 +1,6 @@ # Version -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/api/source_zh_cn/api_java/version.md) +[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/api/source_zh_cn/api_java/version.md) ```java import com.mindspore.config.Version; diff --git a/docs/lite/api/source_zh_cn/index.rst b/docs/lite/api/source_zh_cn/index.rst index aea0c9d775..b278970aeb 100644 --- a/docs/lite/api/source_zh_cn/index.rst +++ b/docs/lite/api/source_zh_cn/index.rst @@ -12,21 +12,21 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 类名 | 接口说明 | C++ 接口 | Python 接口 | 
+=====================+=========================================================================================================+==========================================================================================================================================================================================================================+============================================================================================================================================================================================================================================================================================================================================================================+ -| Context | 设置运行时的线程数 | void SetThreadNum(int32_t thread_num) | `Context.cpu.thread_num `__ | +| Context | 设置运行时的线程数 | void SetThreadNum(int32_t thread_num) | `Context.cpu.thread_num `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | 获取当前线程数设置 | int32_t GetThreadNum() const | `Context.cpu.thread_num `__ | +| Context | 获取当前线程数设置 | int32_t GetThreadNum() const | `Context.cpu.thread_num `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | 设置运行时的算子并行推理数目 | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num `__ | +| Context | 设置运行时的算子并行推理数目 | void SetInterOpParallelNum(int32_t parallel_num) | `Context.cpu.inter_op_parallel_num `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | 获取当前算子并行数设置 | int32_t GetInterOpParallelNum() const | 
`Context.cpu.inter_op_parallel_num `__ | +| Context | 获取当前算子并行数设置 | int32_t GetInterOpParallelNum() const | `Context.cpu.inter_op_parallel_num `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | 设置运行时的CPU绑核策略 | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode `__ | +| Context | 设置运行时的CPU绑核策略 | void SetThreadAffinity(int mode) | `Context.cpu.thread_affinity_mode `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | 获取当前CPU绑核策略 | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode `__ | +| Context | 获取当前CPU绑核策略 | int GetThreadAffinityMode() const | `Context.cpu.thread_affinity_mode `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | 设置运行时的CPU绑核列表 | void SetThreadAffinity(const std::vector &core_list) | `Context.cpu.thread_affinity_core_list `__ | +| Context | 设置运行时的CPU绑核列表 | void SetThreadAffinity(const std::vector &core_list) | `Context.cpu.thread_affinity_core_list `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | 获取当前CPU绑核列表 | std::vector GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list `__ | +| Context | 获取当前CPU绑核列表 | std::vector GetThreadAffinityCoreList() const | `Context.cpu.thread_affinity_core_list `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 设置运行时是否支持并行 | void SetEnableParallel(bool is_parallel) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -44,7 +44,7 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Context | 获取当前配置中,量化模型的运行模式 | bool GetMultiModalHW() const | | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Context | 修改该context下的DeviceInfoContext数组 | std::vector> &MutableDeviceInfo() | 封装在 `Context.target `__ | +| Context | 修改该context下的DeviceInfoContext数组 | std::vector> &MutableDeviceInfo() | 封装在 `Context.target `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 获取该DeviceInfoContext的类型 | enum DeviceType GetDeviceType() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -62,29 +62,29 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | DeviceInfoContext | 获取内存管理器 | std::shared_ptr GetAllocator() const | | 
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| CPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `context.cpu `__ |
+| CPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `context.cpu `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| CPUDeviceInfo | Set whether to perform inference with FP16 precision | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode `__ |
+| CPUDeviceInfo | Set whether to perform inference with FP16 precision | void SetEnableFP16(bool is_fp16) | `Context.cpu.precision_mode `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| CPUDeviceInfo | Get whether inference is currently performed with FP16 precision | bool GetEnableFP16() const | `Context.cpu.precision_mode `__ |
+| CPUDeviceInfo | Get whether inference is currently performed with FP16 precision | bool GetEnableFP16() const | `Context.cpu.precision_mode `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| GPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.gpu `__ |
+| GPUDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.gpu `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| GPUDeviceInfo | Set the device ID | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id `__ |
+| GPUDeviceInfo | Set the device ID | void SetDeviceID(uint32_t device_id) | `Context.gpu.device_id `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| GPUDeviceInfo | Get the device ID | uint32_t GetDeviceID() const | `Context.gpu.device_id `__ |
+| GPUDeviceInfo | Get the device ID | uint32_t GetDeviceID() const | `Context.gpu.device_id `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| GPUDeviceInfo | Get the RANK ID of the current run | int GetRankID() const | `Context.gpu.rank_id `__ |
+| GPUDeviceInfo | Get the RANK ID of the current run | int GetRankID() const | `Context.gpu.rank_id `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| GPUDeviceInfo | Get the GROUP SIZE of the current run | int GetGroupSize() const | `Context.gpu.group_size `__ |
+| GPUDeviceInfo | Get the GROUP SIZE of the current run | int GetGroupSize() const | `Context.gpu.group_size `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| GPUDeviceInfo | Set operator precision for inference | void SetPrecisionMode(const std::string &precision_mode) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| GPUDeviceInfo | Get operator precision for inference | std::string GetPrecisionMode() const | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| GPUDeviceInfo | Set whether to perform inference with FP16 precision | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode `__ |
+| GPUDeviceInfo | Set whether to perform inference with FP16 precision | void SetEnableFP16(bool is_fp16) | `Context.gpu.precision_mode `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| GPUDeviceInfo | Get whether inference is performed with FP16 precision | bool GetEnableFP16() const | `Context.gpu.precision_mode `__ |
+| GPUDeviceInfo | Get whether inference is performed with FP16 precision | bool GetEnableFP16() const | `Context.gpu.precision_mode `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| GPUDeviceInfo | Set whether to bind OpenGL texture data | void SetEnableGLTexture(bool is_enable_gl_texture) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
@@ -98,11 +98,11 @@ Summary of MindSpore Lite API support
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| GPUDeviceInfo | Get the current OpenGL EGLDisplay | void \*GetGLDisplay() const | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| AscendDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.ascend `__ |
+| AscendDeviceInfo | Get the type of this DeviceInfoContext | enum DeviceType GetDeviceType() const | `Context.ascend `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| AscendDeviceInfo | Set the device ID | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id `__ |
+| AscendDeviceInfo | Set the device ID | void SetDeviceID(uint32_t device_id) | `Context.ascend.device_id `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| AscendDeviceInfo | Get the device ID | uint32_t GetDeviceID() const | `Context.ascend.device_id `__ |
+| AscendDeviceInfo | Get the device ID | uint32_t GetDeviceID() const | `Context.ascend.device_id `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| AscendDeviceInfo | Set the AIPP configuration file path | void SetInsertOpConfigPath(const std::string &cfg_path) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
@@ -132,9 +132,9 @@ Summary of MindSpore Lite API support
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| AscendDeviceInfo | Get the model output type | enum DataType GetOutputType() const | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| AscendDeviceInfo | Set the model precision mode | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode `__ |
+| AscendDeviceInfo | Set the model precision mode | void SetPrecisionMode(const std::string &precision_mode) | `Context.ascend.precision_mode `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| AscendDeviceInfo | Get the model precision mode | std::string GetPrecisionMode() const | `Context.ascend.precision_mode `__ |
+| AscendDeviceInfo | Get the model precision mode | std::string GetPrecisionMode() const | `Context.ascend.precision_mode `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| AscendDeviceInfo | Set the operator implementation mode | void SetOpSelectImplMode(const std::string &op_select_impl_mode) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
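The Context and DeviceInfoContext interfaces listed above compose as follows. This is a minimal sketch, not part of the patch, assuming the mindspore-lite C++ headers; the helper name ``BuildContext`` is hypothetical, and only calls named in the table are used.

```cpp
// Hypothetical sketch: assemble a Context with CPU and GPU device infos.
#include <memory>
#include "include/api/context.h"

std::shared_ptr<mindspore::Context> BuildContext() {
  auto context = std::make_shared<mindspore::Context>();
  context->SetEnableParallel(true);  // enable runtime parallelism

  // MutableDeviceInfo() exposes the DeviceInfoContext array of this context.
  auto &device_list = context->MutableDeviceInfo();

  auto cpu_info = std::make_shared<mindspore::CPUDeviceInfo>();
  cpu_info->SetEnableFP16(false);    // keep CPU inference in FP32

  auto gpu_info = std::make_shared<mindspore::GPUDeviceInfo>();
  gpu_info->SetDeviceID(0);          // void SetDeviceID(uint32_t)
  gpu_info->SetEnableFP16(true);     // GPU FP16 inference on

  // Preferred backend first; the runtime falls back along this list.
  device_list.push_back(gpu_info);
  device_list.push_back(cpu_info);
  return context;
}
```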
@@ -160,7 +160,7 @@ Summary of MindSpore Lite API support
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Load the model from a memory buffer and compile it to a state runnable on the device | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| Model | Load the model from a file path and compile it to a state runnable on the device | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr) | `Model.build_from_file `__ |
+| Model | Load the model from a file path and compile it to a state runnable on the device | Status Build(const std::string &model_path, ModelType model_type, const std::shared_ptr<Context> &model_context = nullptr) | `Model.build_from_file `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Load the model from a memory buffer, decrypt it with the given key, and compile it to a state runnable on the device | Status Build(const void \*model_data, size_t data_size, ModelType model_type, const std::shared_ptr<Context> &model_context, const Key &dec_key, const std::string &dec_mode, const std::string &crypto_lib_path) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
@@ -172,11 +172,11 @@ Summary of MindSpore Lite API support
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Build a transfer-learning model in which the backbone weights are fixed and the head weights are trainable | Status BuildTransferLearning(GraphCell backbone, GraphCell head, const std::shared_ptr<Context> &context, const std::shared_ptr<TrainCfg> &train_cfg = nullptr) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| Model | Resize the input tensor shapes of a compiled model | Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims) | `Model.resize `__ |
+| Model | Resize the input tensor shapes of a compiled model | Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims) | `Model.resize `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Update the size and content of the model's weight tensors | Status UpdateWeights(const std::vector<MSTensor> &new_weights) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| Model | Run model inference | Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict `__ |
+| Model | Run model inference | Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.predict `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Run model inference with callbacks only | Status Predict(const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
@@ -188,11 +188,11 @@ Summary of MindSpore Lite API support
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Check whether the model is configured with data preprocessing | bool HasPreprocess() | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| Model | Load a configuration file from a path | Status LoadConfig(const std::string &config_path) | Wrapped in the `config_path` parameter of the `Model.build_from_file `__ method |
+| Model | Load a configuration file from a path | Status LoadConfig(const std::string &config_path) | Wrapped in the `config_path` parameter of the `Model.build_from_file `__ method |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Update the configuration | Status UpdateConfig(const std::string &section, const std::pair<std::string, std::string> &config) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| Model | Get all input tensors of the model | std::vector<MSTensor> GetInputs() | `Model.get_inputs `__ |
+| Model | Get all input tensors of the model | std::vector<MSTensor> GetInputs() | `Model.get_inputs `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Get an input tensor of the model by name | MSTensor GetInputByTensorName(const std::string &tensor_name) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
@@ -220,7 +220,7 @@ Summary of MindSpore Lite API support
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Get training metrics parameters | std::vector<Metrics \*> GetMetrics() | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| Model | Get all output tensors of the model | std::vector<MSTensor> GetOutputs() | Wrapped in the return value of `Model.predict `__ |
+| Model | Get all output tensors of the model | std::vector<MSTensor> GetOutputs() | Wrapped in the return value of `Model.predict `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Get the names of all output tensors of the model | std::vector<std::string> GetOutputTensorNames() | |
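The Model interfaces above chain into the usual build-and-predict flow. A minimal sketch, not part of the patch, assuming the mindspore-lite C++ headers and the ``kMindIR``/``kSuccess`` constants; ``RunOnce`` is a hypothetical helper with error handling reduced to early returns.

```cpp
// Hypothetical sketch: build a model from a file and run one inference.
#include <string>
#include <vector>
#include "include/api/model.h"

int RunOnce(const std::string &model_path,
            const std::shared_ptr<mindspore::Context> &context) {
  mindspore::Model model;
  // Status Build(const std::string &model_path, ModelType, const std::shared_ptr<Context> &)
  if (model.Build(model_path, mindspore::kMindIR, context) != mindspore::kSuccess) {
    return -1;  // build failed
  }
  std::vector<mindspore::MSTensor> inputs = model.GetInputs();
  std::vector<mindspore::MSTensor> outputs;
  // ... fill each inputs[i] via MutableData() before predicting ...
  if (model.Predict(inputs, &outputs) != mindspore::kSuccess) {
    return -1;  // inference failed
  }
  return static_cast<int>(outputs.size());
}
```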
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
@@ -240,33 +240,33 @@ Summary of MindSpore Lite API support
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| Model | Check whether the device supports the model | static bool CheckModelSupport(enum DeviceType device_type, ModelType model_type) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| RunnerConfig | Set the number of workers in RunnerConfig | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num `__ |
+| RunnerConfig | Set the number of workers in RunnerConfig | void SetWorkersNum(int32_t workers_num) | `Context.parallel.workers_num `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| RunnerConfig | Get the number of workers in RunnerConfig | int32_t GetWorkersNum() const | `Context.parallel.workers_num `__ |
+| RunnerConfig | Get the number of workers in RunnerConfig | int32_t GetWorkersNum() const | `Context.parallel.workers_num `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| RunnerConfig | Set the context parameter of RunnerConfig | void SetContext(const std::shared_ptr<Context> &context) | Wrapped in `Context.parallel `__ |
+| RunnerConfig | Set the context parameter of RunnerConfig | void SetContext(const std::shared_ptr<Context> &context) | Wrapped in `Context.parallel `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| RunnerConfig | Get the context parameter configured in RunnerConfig | std::shared_ptr<Context> GetContext() const | Wrapped in `Context.parallel `__ |
+| RunnerConfig | Get the context parameter configured in RunnerConfig | std::shared_ptr<Context> GetContext() const | Wrapped in `Context.parallel `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| RunnerConfig | Set the configuration parameters of RunnerConfig | void SetConfigInfo(const std::string &section, const std::map<std::string, std::string> &config) | `Context.parallel.config_info `__ |
+| RunnerConfig | Set the configuration parameters of RunnerConfig | void SetConfigInfo(const std::string &section, const std::map<std::string, std::string> &config) | `Context.parallel.config_info `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| RunnerConfig | Get the configuration parameter information of RunnerConfig | std::map<std::string, std::map<std::string, std::string>> GetConfigInfo() const | `Context.parallel.config_info `__ |
+| RunnerConfig | Get the configuration parameter information of RunnerConfig | std::map<std::string, std::map<std::string, std::string>> GetConfigInfo() const | `Context.parallel.config_info `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| RunnerConfig | Set the configuration file path in RunnerConfig | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path `__ |
+| RunnerConfig | Set the configuration file path in RunnerConfig | void SetConfigPath(const std::string &config_path) | `Context.parallel.config_path `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| RunnerConfig | Get the configuration file path in RunnerConfig | std::string GetConfigPath() const | `Context.parallel.config_path `__ |
+| RunnerConfig | Get the configuration file path in RunnerConfig | std::string GetConfigPath() const | `Context.parallel.config_path `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| ModelParallelRunner | Load the model from a path, create one or more models, and compile all of them to a state runnable on the device | Status Init(const std::string &model_path, const std::shared_ptr<RunnerConfig> &runner_config = nullptr) | `Model.parallel_runner.build_from_file `__ |
+| ModelParallelRunner | Load the model from a path, create one or more models, and compile all of them to a state runnable on the device | Status Init(const std::string &model_path, const std::shared_ptr<RunnerConfig> &runner_config = nullptr) | `Model.parallel_runner.build_from_file `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| ModelParallelRunner | From model file data, create one or more models and compile all of them to a state runnable on the device | Status Init(const void \*model_data, const size_t data_size, const std::shared_ptr<RunnerConfig> &runner_config = nullptr) | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| ModelParallelRunner | Get all input tensors of the model | std::vector<MSTensor> GetInputs() | `Model.parallel_runner.get_inputs `__ |
+| ModelParallelRunner | Get all input tensors of the model | std::vector<MSTensor> GetInputs() | `Model.parallel_runner.get_inputs `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| ModelParallelRunner | Get all output tensors of the model | std::vector<MSTensor> GetOutputs() | Wrapped in the return value of `Model.parallel_runner.predict `__ |
+| ModelParallelRunner | Get all output tensors of the model | std::vector<MSTensor> GetOutputs() | Wrapped in the return value of `Model.parallel_runner.predict `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| ModelParallelRunner | Run model inference concurrently | Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict `__ |
+| ModelParallelRunner | Run model inference concurrently | Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> \*outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr) | `Model.parallel_runner.predict `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
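As the rows above suggest, RunnerConfig exists to feed ModelParallelRunner::Init. A minimal sketch, not part of the patch, assuming the mindspore-lite server-inference headers; ``RunParallel`` is a hypothetical helper.

```cpp
// Hypothetical sketch: configure a RunnerConfig and run concurrent inference.
#include <memory>
#include <string>
#include <vector>
#include "include/api/model_parallel_runner.h"

int RunParallel(const std::string &model_path,
                const std::shared_ptr<mindspore::Context> &context,
                const std::vector<mindspore::MSTensor> &inputs) {
  auto config = std::make_shared<mindspore::RunnerConfig>();
  config->SetWorkersNum(4);       // number of parallel workers
  config->SetContext(context);    // reuse a prepared Context

  mindspore::ModelParallelRunner runner;
  // Status Init(const std::string &model_path, const std::shared_ptr<RunnerConfig> &)
  if (runner.Init(model_path, config) != mindspore::kSuccess) {
    return -1;  // initialization failed
  }
  std::vector<mindspore::MSTensor> outputs;
  return runner.Predict(inputs, &outputs) == mindspore::kSuccess ? 0 : -1;
}
```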
-| MSTensor | Create an MSTensor object whose data must be copied before the Model can access it | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void \*data, size_t data_len) noexcept | `Tensor `__ |
+| MSTensor | Create an MSTensor object whose data must be copied before the Model can access it | static inline MSTensor \*CreateTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void \*data, size_t data_len) noexcept | `Tensor `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| MSTensor | Create an MSTensor object whose data the Model can access directly | static inline MSTensor \*CreateRefTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void \*data, size_t data_len, bool own_data = true) noexcept | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
@@ -280,19 +280,19 @@ Summary of MindSpore Lite API support
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| MSTensor | Destroy an object created by `Clone`, `StringsToTensor`, `CreateRefTensor`, or `CreateTensor` | static void DestroyTensorPtr(MSTensor \*tensor) noexcept | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| MSTensor | Get the name of the MSTensor | std::string Name() const | `Tensor.name `__ |
+| MSTensor | Get the name of the MSTensor | std::string Name() const | `Tensor.name `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| MSTensor | Get the data type of the MSTensor | enum DataType DataType() const | `Tensor.dtype `__ |
+| MSTensor | Get the data type of the MSTensor | enum DataType DataType() const | `Tensor.dtype `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| MSTensor | Get the shape of the MSTensor | const std::vector<int64_t> &Shape() const | `Tensor.shape `__ |
+| MSTensor | Get the shape of the MSTensor | const std::vector<int64_t> &Shape() const | `Tensor.shape `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| MSTensor | Get the number of elements in the MSTensor | int64_t ElementNum() const | `Tensor.element_num `__ |
+| MSTensor | Get the number of elements in the MSTensor | int64_t ElementNum() const | `Tensor.element_num `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| MSTensor | Get a smart pointer to a copy of the data in the MSTensor | std::shared_ptr<const void> Data() const | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| MSTensor | Get a pointer to the data in the MSTensor | void \*MutableData() | Wrapped in `Tensor.get_data_to_numpy `__ and `Tensor.set_data_from_numpy `__ |
+| MSTensor | Get a pointer to the data in the MSTensor | void \*MutableData() | Wrapped in `Tensor.get_data_to_numpy `__ and `Tensor.set_data_from_numpy `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
-| MSTensor | Get the memory length, in bytes, of the data in the MSTensor | size_t DataSize() const | `Tensor.data_size `__ |
+| MSTensor | Get the memory length, in bytes, of the data in the MSTensor | size_t DataSize() const | `Tensor.data_size `__ |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
| MSTensor | Determine whether the data in the MSTensor is constant data | bool IsConst() const | |
+---------------------+----------------------------------------+------------------------------------------------------------+------------------------------------------------------------+
@@ -308,19 +308,19 @@ Summary of MindSpore Lite API support
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 判断MSTensor是否与另一个MSTensor不相等 | bool operator!=(const MSTensor &tensor) const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 设置MSTensor的Shape | void SetShape(const std::vector &shape) | `Tensor.shape `__ | +| MSTensor | 设置MSTensor的Shape | void SetShape(const std::vector &shape) | `Tensor.shape `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 设置MSTensor的DataType | void SetDataType(enum DataType data_type) | `Tensor.dtype `__ | +| MSTensor | 设置MSTensor的DataType | void SetDataType(enum DataType data_type) | `Tensor.dtype `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 设置MSTensor的名字 | void SetTensorName(const std::string &name) | `Tensor.name `__ | +| MSTensor | 设置MSTensor的名字 | void SetTensorName(const std::string &name) | `Tensor.name `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor数据所属的内存池 | void SetAllocator(std::shared_ptr allocator) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 获取MSTensor数据所属的内存池 | std::shared_ptr allocator() const | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 设置MSTensor数据的format | void SetFormat(mindspore::Format format) | `Tensor.format `__ | +| MSTensor | 设置MSTensor数据的format | void SetFormat(mindspore::Format format) | `Tensor.format `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| MSTensor | 获取MSTensor数据的format | mindspore::Format format() const | `Tensor.format `__ | +| MSTensor | 获取MSTensor数据的format | mindspore::Format format() const | `Tensor.format `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置指向MSTensor数据的指针 | void SetData(void \*data, bool own_data = true) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -332,15 +332,15 @@ MindSpore Lite API 支持情况汇总 +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | MSTensor | 设置MSTensor的量化参数 | void SetQuantParams(std::vector quant_params) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | 构造ModelGroup对象,指示共享工作空间内存或共享权重内存,默认共享工作空间内存 | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup `__ | +| ModelGroup | 构造ModelGroup对象,指示共享工作空间内存或共享权重内存,默认共享工作空间内存 | ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace) | `ModelGroup `__ | 
+---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | 共享权重内存时,添加需要共享权重内存的模型对象 | Status AddModel(const std::vector &model_list) | `ModelGroup.add_model `__ | +| ModelGroup | 共享权重内存时,添加需要共享权重内存的模型对象 | Status AddModel(const std::vector &model_list) | `ModelGroup.add_model `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | 共享工作空间内存时,添加需要共享工作空间内存的模型路径 | Status AddModel(const std::vector &model_path_list) | `ModelGroup.add_model `__ | +| ModelGroup | 共享工作空间内存时,添加需要共享工作空间内存的模型路径 | Status AddModel(const std::vector &model_path_list) | `ModelGroup.add_model `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | ModelGroup | 共享工作空间内存时,添加需要共享工作空间内存的模型缓存 | Status AddModel(const std::vector> &model_buff_list) | | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| ModelGroup | 共享工作空间内存时,计算最大的工作空间内存大小 | Status CalMaxSizeOfWorkspace(ModelType model_type, const 
std::shared_ptr &ms_context) | `ModelGroup.cal_max_size_of_workspace `__ | +| ModelGroup | 共享工作空间内存时,计算最大的工作空间内存大小 | Status CalMaxSizeOfWorkspace(ModelType model_type, const std::shared_ptr &ms_context) | `ModelGroup.cal_max_size_of_workspace `__ | +---------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/docs/lite/docs/source_en/advanced/image_processing.md b/docs/lite/docs/source_en/advanced/image_processing.md index 2b7abf9c87..7daf59f71e 100644 --- a/docs/lite/docs/source_en/advanced/image_processing.md +++ b/docs/lite/docs/source_en/advanced/image_processing.md @@ -1,6 +1,6 @@ # Data Preprocessing -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/image_processing.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/image_processing.md) ## Overview @@ -15,7 +15,7 @@ The main purpose of image preprocessing is to eliminate irrelevant information i ## Initializing the Image -Here, the [InitFromPixel](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/function_mindspore_dataset_InitFromPixel-1.html) function in the `image_process.h` file is used to initialize the image. +Here, the [InitFromPixel](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/function_mindspore_dataset_InitFromPixel-1.html) function in the `image_process.h` file is used to initialize the image. ```cpp bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m) @@ -38,7 +38,7 @@ The image processing operations here can be used in any combination according to ### Resizing Image -Here we use the [ResizeBilinear](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/function_mindspore_dataset_ResizeBilinear-1.html) function in `image_process.h` to resize the image through a bilinear algorithm. Currently, the supported data type is uint8, and the supported channels are 3 and 1. +Here we use the [ResizeBilinear](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/function_mindspore_dataset_ResizeBilinear-1.html) function in `image_process.h` to resize the image through a bilinear algorithm. Currently, the supported data type is uint8, and the supported channels are 3 and 1. ```cpp bool ResizeBilinear(const LiteMat &src, LiteMat &dst, int dst_w, int dst_h) @@ -60,7 +60,7 @@ ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256); ### Converting the Image Data Type -Here we use the [ConvertTo](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/function_mindspore_dataset_ConvertTo-1.html) function in `image_process.h` to convert the image data type. 
Currently, the conversion from uint8 to float is supported. +Here we use the [ConvertTo](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/function_mindspore_dataset_ConvertTo-1.html) function in `image_process.h` to convert the image data type. Currently, the conversion from uint8 to float is supported. ```cpp bool ConvertTo(const LiteMat &src, LiteMat &dst, double scale = 1.0) @@ -82,7 +82,7 @@ ConvertTo(lite_mat_bgr, lite_mat_convert_float); ### Cropping Image Data -Here we use the [Crop](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/function_mindspore_dataset_Crop-1.html) function in `image_process.h` to crop the image. Currently, channels 3 and 1 are supported. +Here we use the [Crop](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/function_mindspore_dataset_Crop-1.html) function in `image_process.h` to crop the image. Currently, channels 3 and 1 are supported. ```cpp bool Crop(const LiteMat &src, LiteMat &dst, int x, int y, int w, int h) @@ -104,7 +104,7 @@ Crop(lite_mat_bgr, lite_mat_cut, 16, 16, 224, 224); ### Normalizing Image Data -In order to eliminate the dimensional influence among the data indicators and solve the comparability problem among the data indicators through standardization processing is adopted, here is the use of the [SubStractMeanNormalize](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/function_mindspore_dataset_SubStractMeanNormalize-1.html) function in `image_process.h` to normalize the image data. +To eliminate the dimensional influence among the data indicators and make them comparable, standardization processing is adopted. Here, the [SubStractMeanNormalize](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/function_mindspore_dataset_SubStractMeanNormalize-1.html) function in `image_process.h` is used to normalize the image data. ```cpp bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector &mean, const std::vector &std) diff --git a/docs/lite/docs/source_en/advanced/micro.md b/docs/lite/docs/source_en/advanced/micro.md index 2082a800af..865c5642c6 100644 --- a/docs/lite/docs/source_en/advanced/micro.md +++ b/docs/lite/docs/source_en/advanced/micro.md @@ -1,6 +1,6 @@ # Performing Inference or Training on MCU or Small Systems -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/micro.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/micro.md) ## Overview @@ -18,7 +18,7 @@ Deploying a model for inference or training via the Micro involves the following ### Overview The Micro configuration item in the parameter configuration file is configured via the MindSpore Lite conversion tool `converter_lite`. -This chapter describes the functions related to code generation in the conversion tool. For details about how to use the conversion tool, see [Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/r2.7.1/converter/converter_tool.html). +This chapter describes the functions related to code generation in the conversion tool. For details about how to use the conversion tool, see [Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/r2.7.2/converter/converter_tool.html).
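As a minimal illustration of the preprocessing flow documented in the image-processing hunks above, the following sketch chains the `image_process.h` calls end to end. The function signatures and argument values mirror the snippets shown above; the header paths, the `LPixelType::RGBA2BGR`/`LDataType::UINT8` enum values, and the mean/std constants are assumptions for illustration, not content of this patch.

```cpp
#include <vector>
#include "lite_cv/lite_mat.h"       // assumed header path for LiteMat
#include "lite_cv/image_process.h"  // assumed header path for the functions below

using namespace mindspore::dataset;

// Init -> resize -> crop -> convert -> normalize, mirroring the documented snippets.
bool Preprocess(const unsigned char *rgba, int w, int h, LiteMat &out) {
  LiteMat lite_mat_bgr, lite_mat_resize, lite_mat_cut, lite_mat_convert_float;
  // RGBA2BGR / UINT8 are illustrative enum values for the pixel and data type.
  if (!InitFromPixel(rgba, LPixelType::RGBA2BGR, LDataType::UINT8, w, h, lite_mat_bgr)) return false;
  if (!ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256)) return false;  // uint8, 1 or 3 channels
  if (!Crop(lite_mat_resize, lite_mat_cut, 16, 16, 224, 224)) return false;    // x, y, w, h
  if (!ConvertTo(lite_mat_cut, lite_mat_convert_float)) return false;          // uint8 -> float32
  // Placeholder statistics; real values depend on how the model was trained.
  std::vector<float> mean = {0.485f, 0.456f, 0.406f};
  std::vector<float> std_dev = {0.229f, 0.224f, 0.225f};
  return SubStractMeanNormalize(lite_mat_convert_float, out, mean, std_dev);
}
```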
### Preparing Environment @@ -32,11 +32,11 @@ The following describes how to prepare the environment for using the conversion You can obtain the conversion tool in either of the following ways: - - Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html) from the MindSpore Lite official website. + - Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html) from the MindSpore Lite official website. Download the release package whose OS is Linux-x86_64 and hardware platform is CPU. - - Start from the source code for [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html). + - Start from the source code for [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html). 3. Decompress the downloaded package. @@ -103,7 +103,7 @@ The following describes how to prepare the environment for using the conversion CONVERT RESULT SUCCESS:0 ``` - For details about the parameters related to converter_lite, see [Converter Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.1/converter/converter_tool.html#parameter-description). + For details about the parameters related to converter_lite, see [Converter Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.2/converter/converter_tool.html#parameter-description). After the conversion tool is successfully executed, the generated code is saved in the specified `outputFile` directory. In this example, the mnist folder is in the current conversion directory. The content is as follows: @@ -228,7 +228,7 @@ Table 1: micro_param Parameter Definition CONVERT RESULT SUCCESS:0 ``` - For details about the parameters related to converter_lite, see [Converter Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.1/converter/converter_tool.html#parameter-description). + For details about the parameters related to converter_lite, see [Converter Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.2/converter/converter_tool.html#parameter-description). After the conversion tool is successfully executed, the generated code is saved in the specified `save_path` + `project_name` directory. In this example, the mnist folder is in the current conversion directory. The content is as follows: @@ -277,7 +277,7 @@ Table 1: micro_param Parameter Definition Usually, when generating code, you can reduce the probability of errors in the deployment process by configuring the model input shape as the input shape for actual inference. When the model contains a `Shape` operator or the original model has a non-fixed input shape value, the input shape value of the model must be configured to support the relevant shape optimization and code generation. -The `--inputShape=` command of the conversion tool can be used to configure the input shape of the generated code. For specific parameter meanings, please refer to [Conversion Tool Instructions](https://www.mindspore.cn/lite/docs/en/r2.7.1/converter/converter_tool.html). +The `--inputShape=` command of the conversion tool can be used to configure the input shape of the generated code. For specific parameter meanings, please refer to [Conversion Tool Instructions](https://www.mindspore.cn/lite/docs/en/r2.7.2/converter/converter_tool.html). 
### (Optional) Dynamic Shape Configuration @@ -325,7 +325,7 @@ support_parallel=true #### Involved Calling Interfaces By integrating the code and calling the following interfaces, the user can configure the multi-threaded inference of the model. -For specific interface parameters, refer to [API Document](https://www.mindspore.cn/lite/api/en/r2.7.1/index.html). +For specific interface parameters, refer to [API Document](https://www.mindspore.cn/lite/api/en/r2.7.2/index.html). Table 2: API Interface for Multi-threaded Configuration @@ -349,12 +349,12 @@ At present, this function is only enabled when the `target` is configured as x86 In MCU scenarios such as Cortex-M, limited by the memory size and computing power of the device, Int8 quantization operators are usually used for deployment inference to reduce the runtime memory size and speed up operations. -If the user already has an Int8 full quantitative model, you can refer to the section on [Generating Inference Code by Running converter_lite](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/micro.html#generating-inference-code-by-running-converter-lite) to try to generate Int8 quantitative inference code directly without reading this chapter. +If the user already has an Int8 fully quantized model, you can refer to the section on [Generating Inference Code by Running converter_lite](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/micro.html#generating-inference-code-by-running-converter-lite) to try to generate Int8 quantized inference code directly without reading this chapter. In general, the user has only one trained float32 model. To generate Int8 quantitative inference code at this time, it is necessary to cooperate with the post quantization function of the conversion tool to generate code. See the following for specific steps. #### Configuration -Int8 quantization inference code can be generated by configuring quantization control parameters in the configuration file. For the description of quantization control parameters (universal quantization parameter `common_quant_param` and full quantization parameter `full_quant_param`), please refer to the [Quantization](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/quantization.html). +Int8 quantization inference code can be generated by configuring quantization control parameters in the configuration file. For the description of quantization control parameters (universal quantization parameter `common_quant_param` and full quantization parameter `full_quant_param`), please refer to the [Quantization](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/quantization.html). An example of Int8 quantitative inference code generation configuration file for a `Cortex-M` platform is as follows: @@ -411,7 +411,7 @@ target_device=DSP ### Overview The training code can be generated for the input model by using the MindSpore Lite conversion tool `converter_lite` and configuring the Micro configuration item in the parameter configuration file of the conversion tool. -This chapter describes the functions related to code generation in the conversion tool. For details about how to use the conversion tool, see [Converting Models for Training](https://www.mindspore.cn/lite/docs/en/r2.7.1/train/converter_train.html). +This chapter describes the functions related to code generation in the conversion tool. For details about how to use the conversion tool, see [Converting Models for Training](https://www.mindspore.cn/lite/docs/en/r2.7.2/train/converter_train.html).
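Referring back to the multi-threaded configuration interfaces around Table 2: the sketch below shows how such a configuration might look through the MindSpore Lite C API handles, which the generated code consumes. The functions `MSContextCreate`, `MSContextSetThreadNum`, `MSContextSetEnableParallel`, `MSContextSetThreadAffinityMode`, and `MSModelBuild` come from the Lite C API; the thread count, affinity mode value, and header paths are illustrative assumptions rather than values from this patch.

```cpp
#include "c_api/context_c.h"  // assumed header paths within the release package
#include "c_api/model_c.h"

// Create a context with two worker threads and parallel execution enabled,
// then build the model with it (net_buf holds the model buffer content).
MSStatus BuildWithThreads(MSModelHandle model, const void *net_buf, size_t net_size) {
  MSContextHandle context = MSContextCreate();
  MSContextSetThreadNum(context, 2);           // illustrative worker thread count
  MSContextSetEnableParallel(context, true);   // allow parallel execution
  MSContextSetThreadAffinityMode(context, 1);  // 1: bind big cores (illustrative)
  return MSModelBuild(model, net_buf, net_size, kMSModelTypeMindIR, context);
}
```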
### Preparing Environment @@ -491,7 +491,7 @@ For preparing environment section, refer to the [above](#preparing-environment), After generating model inference code, you need to obtain the `Micro` lib on which the generated inference code depends before performing integrated development on the code. The inference code of different platforms depends on the `Micro` lib of the corresponding platform. You need to specify the platform via the micro configuration item `target` based on the platform in use when generating code, and obtain the `Micro` lib of the platform when obtaining the inference package. -You can download the [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html) of the corresponding platform from the MindSpore Lite official website. +You can download the [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html) of the corresponding platform from the MindSpore Lite official website. In chapter [Generating Model Inference Code](#generating-model-inference-code), we obtain the model inference code of the Linux platform with the x86_64 architecture. The `Micro` lib on which the code depends is the release package used by the conversion tool. In the release package, the following content depended by the inference code: @@ -523,7 +523,7 @@ Users can refer to the benchmark routine to integrate and develop the `src` infe ### Calling Interface of Inference Code -The following is the general calling interface of the inference code. For a detailed description of the interface, please refer to the [API documentation](https://www.mindspore.cn/lite/api/en/r2.7.1/index.html). +The following is the general calling interface of the inference code. For a detailed description of the interface, please refer to the [API documentation](https://www.mindspore.cn/lite/api/en/r2.7.2/index.html). Table 3: Inference Common API Interface @@ -559,9 +559,9 @@ Different platforms have differences in code integration and compilation deploym - For the MCU of the cortex-M architecture, see [Performing Inference on the MCU](#performing-inference-on-the-mcu) -- For the Linux platform with the x86_64 architecture, see [Compilation and Deployment on Linux_x86_64 Platform](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/mindspore-lite/examples/quick_start_micro/mnist_x86) +- For the Linux platform with the x86_64 architecture, see [Compilation and Deployment on Linux_x86_64 Platform](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/mindspore-lite/examples/quick_start_micro/mnist_x86) -- For details about how to compile and deploy arm32 or arm64 on the Android platform, see [Compilation and Deployment on Android Platform](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/mindspore-lite/examples/quick_start_micro/mobilenetv2_arm64) +- For details about how to compile and deploy arm32 or arm64 on the Android platform, see [Compilation and Deployment on Android Platform](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/mindspore-lite/examples/quick_start_micro/mobilenetv2_arm64) - For compilation and deployment on the OpenHarmony platform, see [Executing Inference on Light Harmony Devices](#executing-inference-on-light-harmony-devices) @@ -619,11 +619,11 @@ mnist # Specified name of generated code root directory The STM32F767 uses the Cortex-M7 architecture. 
You can obtain the `Micro` lib of the architecture in either of the following ways: -- Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html) from the MindSpore Lite official website. +- Download [Release Version](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html) from the MindSpore Lite official website. You need to download the release package whose OS is None and hardware platform is Cortex-M7. -- Start from the source code for [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html). +- Start from the source code for [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html). You can run the `MSLITE_MICRO_PLATFORM=cortex-m7 bash build.sh -I x86_64` command to compile the Cortex-M7 release package. @@ -1004,7 +1004,7 @@ For details about how to develop light Harmony applications, see [Running Hello └── src ``` -Download the [precompiled inference runtime package](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html) for OpenHarmony and decompress it to any Harmony source code path. Compile BUILD.gn file: +Download the [precompiled inference runtime package](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html) for OpenHarmony and decompress it to any Harmony source code path. Compile BUILD.gn file: ```text import("//build/lite/config/component/lite_component.gni") @@ -1123,7 +1123,7 @@ name: int8toft32_Softmax-7_post0/output-0, DataType: 43, Elements: 10, Shape: [1 ## Custom Kernel -Please refer to [Custom Kernel](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/third_party/register.html) to understand the basic concepts before using. +Please refer to [Custom Kernel](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/third_party/register.html) to understand the basic concepts before using. Micro currently only supports the registration and implementation of custom operators of custom type, and does not support the registration and custom implementation of built-in operators (such as conv2d and fc). We use Hi3516D board as an example to show you how to use kernel register in Micro. @@ -1155,7 +1155,7 @@ The previous step generates the source code directory under the specified path w int CustomKernel(TensorC *inputs, int input_num, TensorC *outputs, int output_num, CustomParameter *param); ``` -Users need to implement this function and add their source files to the cmake project. For example, we provide the custom kernel example dynamic library libmicro_nnie.so that supports NNIE from Hysis, which is included in the [official download page](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html) "NNIE inference runtime lib, benchmark tool" component. Users need to modify the CMakeLists.txt of the generated code, add the name and path of the linked library. +Users need to implement this function and add their source files to the cmake project. For example, we provide the custom kernel example dynamic library libmicro_nnie.so that supports NNIE from HiSilicon, which is included in the [official download page](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html) "NNIE inference runtime lib, benchmark tool" component. Users need to modify the CMakeLists.txt of the generated code, add the name and path of the linked library.
```shell @@ -1167,7 +1167,7 @@ target_link_libraries(benchmark net micro_nnie nnie mpi VoiceEngine upvqe dnvqe ``` -In the generated `benchmark/benchmark.c` file, add the [NNIE device related initialization code](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/test/config_level0/micro/svp_sys_init.c) before and after calling the main function. +In the generated `benchmark/benchmark.c` file, add the [NNIE device related initialization code](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/test/config_level0/micro/svp_sys_init.c) before and after calling the main function. Finally, we compile the source code: ```shell @@ -1200,7 +1200,7 @@ Except for MCU, micro inference is a inference model that separates model struct ### Exporting Inference Model -Users can directly refer to [Device-side training](https://www.mindspore.cn/lite/docs/en/r2.7.1/train/runtime_train_cpp.html). +Users can directly refer to [Device-side training](https://www.mindspore.cn/lite/docs/en/r2.7.2/train/runtime_train_cpp.html). ### Generating Inference Code diff --git a/docs/lite/docs/source_en/advanced/quantization.md b/docs/lite/docs/source_en/advanced/quantization.md index f125d8052d..52ebbd5cc0 100644 --- a/docs/lite/docs/source_en/advanced/quantization.md +++ b/docs/lite/docs/source_en/advanced/quantization.md @@ -1,6 +1,6 @@ # Quantization -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/quantization.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/quantization.md) ## Overview @@ -114,7 +114,7 @@ For the scenarios where the CV model needs to improve the model running speed an To fully quantize the quantization parameters for calculating the activation values, the user needs to provide a calibration dataset. The calibration dataset should preferably come from real inference scenarios that characterize the actual inputs to the model, in the order of 100 - 500, **and the calibration dataset needs to be processed into `NHWC` format**. -For image data, it currently supports the functions of channel adjustment, normalization, scaling, cropping and other preprocessing. The user can set the appropriate [Data Preprocessing Parameters](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/quantization.html#data-preprocessing-parameters) according to the preprocessing operation required for inference. +For image data, it currently supports the functions of channel adjustment, normalization, scaling, cropping and other preprocessing. The user can set the appropriate [Data Preprocessing Parameters](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/quantization.html#data-preprocessing-parameters) according to the preprocessing operation required for inference. User configuration of full quantization requires at least `[common_quant_param]`, `[data_preprocess_param]`, and `[full_quant_param]`. @@ -223,7 +223,7 @@ target_device=DSP #### Ascend -Ascend quantization needs to configure Ascend-related configuration at [offline conversion](https://www.mindspore.cn/lite/docs/en/r2.7.1/mindir/converter_tool.html#description-of-parameters) first, i.e. 
`optimize` needs to be set to `ascend_oriented`, and then configure Ascend related environment variables during conversion. +Ascend quantization needs to configure Ascend-related configuration at [offline conversion](https://www.mindspore.cn/lite/docs/en/r2.7.2/mindir/converter_tool.html#description-of-parameters) first, i.e. `optimize` needs to be set to `ascend_oriented`, and then configure Ascend related environment variables during conversion. **Ascend Fully Quantized Static Shape Parameter Configuration** @@ -245,7 +245,7 @@ Ascend quantization needs to configure Ascend-related configuration at [offline target_device=ASCEND ``` -**Ascend full quantization supports dynamic Shape parameters**. The conversion command needs to set the same inputShape of the calibration dataset, which can be found in [Conversion Tool Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.1/mindir/converter_tool.html#description-of-parameters). +**Ascend full quantization supports dynamic Shape parameters**. The conversion command needs to set the same inputShape of the calibration dataset, which can be found in [Conversion Tool Parameter Description](https://www.mindspore.cn/lite/docs/en/r2.7.2/mindir/converter_tool.html#description-of-parameters). - The general form of the conversion command in the Ascend fully quantized static shape scenario is: @@ -301,7 +301,7 @@ quant_strategy=ACWL ## Configuration Parameter -Post training quantization can be enabled by configuring `configFile` through [Conversion Tool](https://www.mindspore.cn/lite/docs/en/r2.7.1/converter/converter_tool.html). The configuration file adopts the style of [`INI`](https://en.wikipedia.org/wiki/INI_file). For quantization, configurable parameters include: +Post training quantization can be enabled by configuring `configFile` through [Conversion Tool](https://www.mindspore.cn/lite/docs/en/r2.7.2/converter/converter_tool.html). The configuration file adopts the style of [`INI`](https://en.wikipedia.org/wiki/INI_file). For quantization, configurable parameters include: - `[common_quant_param]: Public quantization parameters` - `[weight_quant_param]: Fixed bit quantization parameters` diff --git a/docs/lite/docs/source_en/advanced/third_party.rst b/docs/lite/docs/source_en/advanced/third_party.rst index d11a98d645..6df0dfca71 100644 --- a/docs/lite/docs/source_en/advanced/third_party.rst +++ b/docs/lite/docs/source_en/advanced/third_party.rst @@ -1,8 +1,8 @@ Third-party Access ================================= -.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg - :target: https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party.rst +.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg + :target: https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party.rst :alt: View Source On Gitee .. 
toctree:: diff --git a/docs/lite/docs/source_en/advanced/third_party/ascend_info.md b/docs/lite/docs/source_en/advanced/third_party/ascend_info.md index e2af8593b7..0c368a9de7 100644 --- a/docs/lite/docs/source_en/advanced/third_party/ascend_info.md +++ b/docs/lite/docs/source_en/advanced/third_party/ascend_info.md @@ -1,11 +1,11 @@ # Integrated Ascend -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party/ascend_info.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party/ascend_info.md) > - The Ascend backend support in the device-side version will be deprecated. For related usage of the Ascend backend, please refer to the cloud-side inference version documentation. -> - [Build Cloud-side MindSpore Lite](https://mindspore.cn/lite/docs/en/r2.7.1/mindir/build.html) -> - [Cloud-side Model Converter](https://mindspore.cn/lite/docs/en/r2.7.1/mindir/converter.html) -> - [Cloud-side Benchmark Tool](https://mindspore.cn/lite/docs/en/r2.7.1/mindir/benchmark.html) +> - [Build Cloud-side MindSpore Lite](https://mindspore.cn/lite/docs/en/r2.7.2/mindir/build.html) +> - [Cloud-side Model Converter](https://mindspore.cn/lite/docs/en/r2.7.2/mindir/converter.html) +> - [Cloud-side Benchmark Tool](https://mindspore.cn/lite/docs/en/r2.7.2/mindir/benchmark.html) This document describes how to use MindSpore Lite to perform inference and use the dynamic shape function on Linux in the Ascend environment. Currently, MindSpore Lite supports the Atlas 200/300/500 inference product and Atlas inference series. @@ -75,7 +75,7 @@ export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} MindSpore Lite provides an offline model converter to convert various models (Caffe, ONNX, TensorFlow, and MindIR) into models that can be inferred on the Ascend hardware. First, use the converter to convert a model into an `ms` model. Then, use the runtime inference framework matching the converter to perform inference. The process is as follows: -1. [Download](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html) the converter dedicated for Ascend. Currently, only Linux is supported. +1. [Download](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html) the converter dedicated for Ascend. Currently, only Linux is supported. 2. Decompress the downloaded package. @@ -115,7 +115,7 @@ First, use the converter to convert a model into an `ms` model. Then, use the ru CONVERT RESULT SUCCESS:0 ``` - For details about parameters of the converter_lite converter, see ["Parameter Description" in Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/r2.7.1/converter/converter_tool.html#parameter-description). + For details about parameters of the converter_lite converter, see ["Parameter Description" in Converting Models for Inference](https://www.mindspore.cn/lite/docs/en/r2.7.2/converter/converter_tool.html#parameter-description). Note: If the input shape of the original model is uncertain, specify inputShape when using the converter to convert a model. In addition, set configFile to the value of input_shape_vector parameter in acl_option_cfg_param. 
The command is as follows: @@ -145,12 +145,12 @@ Table 1 [acl_option_cfg_param] parameter configuration ## Runtime -After obtaining the converted model, use the matching runtime inference framework to perform inference. For details about how to use runtime to perform inference, see [Using C++ Interface to Perform Inference](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html). +After obtaining the converted model, use the matching runtime inference framework to perform inference. For details about how to use runtime to perform inference, see [Using C++ Interface to Perform Inference](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html). ## Executing the Benchmark MindSpore Lite provides a benchmark test tool, which can be used to perform quantitative (performance) analysis on the execution time consumed by forward inference of the MindSpore Lite model. In addition, you can perform comparative error (accuracy) analysis based on the output of a specified model. -For details about the inference tool, see [benchmark](https://www.mindspore.cn/lite/docs/en/r2.7.1/tools/benchmark_tool.html). +For details about the inference tool, see [benchmark](https://www.mindspore.cn/lite/docs/en/r2.7.2/tools/benchmark_tool.html). - Performance analysis @@ -170,7 +170,7 @@ For details about the inference tool, see [benchmark](https://www.mindspore.cn/l ### Dynamic Shape -The batch size is not fixed in certain scenarios. For example, in the target detection+facial recognition cascade scenario, the number of detected targets is subject to change, which means that the batch size of the targeted recognition input is dynamic. It would be a great waste of compute resources to perform inferences using the maximum batch size or image size. Thanks to Lite's support for dynamic batch size and dynamic image size on the Atlas 200/300/500 inference product, you can configure the [acl_option_cfg_param] dynamic parameter through configFile to convert a model into an `ms` model, and then use the [resize](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html#resizing-the-input-dimension) function of the model to change the input shape during inference. +The batch size is not fixed in certain scenarios. For example, in the target detection+facial recognition cascade scenario, the number of detected targets is subject to change, which means that the batch size of the targeted recognition input is dynamic. It would be a great waste of compute resources to perform inferences using the maximum batch size or image size. Thanks to Lite's support for dynamic batch size and dynamic image size on the Atlas 200/300/500 inference product, you can configure the [acl_option_cfg_param] dynamic parameter through configFile to convert a model into an `ms` model, and then use the [resize](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html#resizing-the-input-dimension) function of the model to change the input shape during inference. #### Dynamic Batch Size @@ -204,7 +204,7 @@ The batch size is not fixed in certain scenarios. For example, in the target det - Inference - After the dynamic batch size is enabled, during model inference, the input shape is corresponding to the size configured in converter. To change the input shape, use the model [resize](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html#resizing-the-input-dimension) function. 
+ After the dynamic batch size is enabled, during model inference, the input shape corresponds to the size configured in the converter. To change the input shape, use the model [resize](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html#resizing-the-input-dimension) function. - Precautions @@ -245,7 +245,7 @@ The batch size is not fixed in certain scenarios. For example, in the target det - Inference - After the dynamic image size is enabled, during model inference, the input shape is corresponding to the size configured in converter. To change the input shape, use the model [resize](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html#resizing-the-input-dimension) function. + After the dynamic image size is enabled, during model inference, the input shape corresponds to the size configured in the converter. To change the input shape, use the model [resize](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html#resizing-the-input-dimension) function. - Precautions @@ -255,4 +255,4 @@ The batch size is not fixed in certain scenarios. For example, in the target det ## Supported Operators -For details about the supported operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.1/reference/operator_list_lite.html). +For details about the supported operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.2/reference/operator_list_lite.html). diff --git a/docs/lite/docs/source_en/advanced/third_party/asic.rst b/docs/lite/docs/source_en/advanced/third_party/asic.rst index e887bc68a1..88f997f2dd 100644 --- a/docs/lite/docs/source_en/advanced/third_party/asic.rst +++ b/docs/lite/docs/source_en/advanced/third_party/asic.rst @@ -1,8 +1,8 @@ Application Specific Integrated Circuit Integration Instructions ================================================================ -.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg - :target: https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party/asic.rst +.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg + :target: https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party/asic.rst :alt: View Source On Gitee ..
toctree:: diff --git a/docs/lite/docs/source_en/advanced/third_party/converter_register.md b/docs/lite/docs/source_en/advanced/third_party/converter_register.md index ada1946feb..37e8f25a3b 100644 --- a/docs/lite/docs/source_en/advanced/third_party/converter_register.md +++ b/docs/lite/docs/source_en/advanced/third_party/converter_register.md @@ -1,20 +1,20 @@ # Building Custom Operators Offline -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party/converter_register.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party/converter_register.md) ## Overview -MindSpore Lite [Conversion Tool](https://www.mindspore.cn/lite/docs/en/r2.7.1/converter/converter_tool.html), in addition to the basic model conversion function, also supports user-defined model optimization and construction to generate models with user-defined operators. +MindSpore Lite [Conversion Tool](https://www.mindspore.cn/lite/docs/en/r2.7.2/converter/converter_tool.html), in addition to the basic model conversion function, also supports user-defined model optimization and construction to generate models with user-defined operators. We have designed a set of registration mechanisms, which allows users to expand, including node-parse extension, model-parse extension and graph-optimization extension. The users can combined them as needed to achieve their own intention. -node-parse extension: The users can define the process to parse a certain node of a model by themselves, which only support ONNX, CAFFE, TF and TFLITE. The related interface is [NodeParser](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_converter_NodeParser.html), [NodeParserRegistry](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_registry_NodeParserRegistry.html). -model-parse extension: The users can define the process to parse a model by themselves, which only support ONNX, CAFFE, TF and TFLITE. The related interface is [ModelParser](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_converter_ModelParser.html), [ModelParserRegistry](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_registry_ModelParserRegistry.html). -graph-optimization extension: After parsing a model, a graph structure defined by MindSpore Lite will show up and then, the users can define the process to optimize the parsed graph. The related interfaces are [PassBase](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_registry_PassBase.html), [PassPosition](https://mindspore.cn/lite/api/en/r2.7.1/generate/enum_mindspore_registry_PassPosition-1.html), [PassRegistry](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_registry_PassRegistry.html). +node-parse extension: The users can define the process to parse a certain node of a model by themselves, which only supports ONNX, CAFFE, TF and TFLITE. The related interfaces are [NodeParser](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_converter_NodeParser.html), [NodeParserRegistry](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_registry_NodeParserRegistry.html).
+model-parse extension: The users can define the process to parse a model by themselves, which only supports ONNX, CAFFE, TF and TFLITE. The related interfaces are [ModelParser](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_converter_ModelParser.html), [ModelParserRegistry](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_registry_ModelParserRegistry.html). +graph-optimization extension: After parsing a model, a graph structure defined by MindSpore Lite will show up, and the users can then define the process to optimize the parsed graph. The related interfaces are [PassBase](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_registry_PassBase.html), [PassPosition](https://mindspore.cn/lite/api/en/r2.7.2/generate/enum_mindspore_registry_PassPosition-1.html), [PassRegistry](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_registry_PassRegistry.html). -> The node-parse extension needs to rely on the flatbuffers, protobuf and the serialization files of third-party frameworks, at the same time, the version of flatbuffers and the protobuf needs to be consistent with that of the released package, the serialized files must be compatible with that used by the released package. Note that the flatbuffers, protobuf and the serialization files are not provided in the released package, users need to compile and generate the serialized files by themselves. The users can obtain the basic information about [flatbuffers](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/cmake/external_libs/flatbuffers.cmake), [protobuf](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/cmake/external_libs/protobuf.cmake), [ONNX prototype file](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/third_party/proto/onnx), [CAFFE prototype file](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/third_party/proto/caffe), [TF prototype file](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/third_party/proto/tensorflow) and [TFLITE prototype file](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/tools/converter/parser/tflite/schema.fbs) from the [MindSpore WareHouse](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1). +> The node-parse extension needs to rely on the flatbuffers, protobuf and the serialization files of third-party frameworks; at the same time, the version of flatbuffers and protobuf needs to be consistent with that of the released package, and the serialized files must be compatible with those used by the released package. Note that the flatbuffers, protobuf and the serialization files are not provided in the released package; users need to compile and generate the serialized files by themselves. The users can obtain the basic information about [flatbuffers](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/cmake/external_libs/flatbuffers.cmake), [protobuf](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/cmake/external_libs/protobuf.cmake), [ONNX prototype file](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/third_party/proto/onnx), [CAFFE prototype file](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/third_party/proto/caffe), [TF prototype file](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/third_party/proto/tensorflow) and [TFLITE prototype file](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/tools/converter/parser/tflite/schema.fbs) from the [MindSpore WareHouse](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2).
> -> MindSpore Lite alse provides a series of registration macros to facilitate user access. These macros include node-parse registration [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), model-parse registration [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_model_parser_registry.h_REG_MODEL_PARSER-1.html), graph-optimization registration [REG_PASS](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_pass_registry.h_REG_PASS-1.html) and graph-optimization scheduled registration [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_pass_registry.h_REG_SCHEDULED_PASS-1.html) +> MindSpore Lite also provides a series of registration macros to facilitate user access. These macros include node-parse registration [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), model-parse registration [REG_MODEL_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_model_parser_registry.h_REG_MODEL_PARSER-1.html), graph-optimization registration [REG_PASS](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_pass_registry.h_REG_PASS-1.html) and graph-optimization scheduled registration [REG_SCHEDULED_PASS](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_pass_registry.h_REG_SCHEDULED_PASS-1.html). The expansion capability of MindSpore Lite conversion tool only supports on Linux system currently. @@ -22,15 +22,15 @@ In this chapter, we will show the users a sample of extending MindSpore Lite con > Due to that model-parse extension is a modular extension ability, the chapter will not introduce in details. However, we still provide the users with a simplified unit case for inference. -The chapter takes a [add.tflite](https://download.mindspore.cn/model_zoo/official/lite/quick_start/add.tflite), which only includes an operator of adding, as an example. We will show the users how to convert the single operator of adding to that of [Custom](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/third_party/register_kernel.html#custom-operators) and finally obtain a model which only includes a single operator of custom. +The chapter takes an [add.tflite](https://download.mindspore.cn/model_zoo/official/lite/quick_start/add.tflite), which only includes an operator of adding, as an example. We will show the users how to convert the single operator of adding to that of [Custom](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/third_party/register_kernel.html#custom-operators) and finally obtain a model which only includes a single operator of custom. -The code related to the example can be obtained from the path [mindspore-lite/examples/converter_extend](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/mindspore-lite/examples/converter_extend). +The code related to the example can be obtained from the path [mindspore-lite/examples/converter_extend](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/mindspore-lite/examples/converter_extend). ## Node Extension -1. Self-defined node-parse: The users need to inherit the base class [NodeParser](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_converter_NodeParser.html), and then, choose a interface to override according to model frameworks. +1. 
Self-defined node-parse: The users need to inherit the base class [NodeParser](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_converter_NodeParser.html), and then choose an interface to override according to model frameworks. -2. Node-parse Registration: The users can directly call the registration interface [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), so that the self-defined node-parse will be registered in the converter tool of MindSpore Lite. +2. Node-parse Registration: The users can directly call the registration interface [REG_NODE_PARSER](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_node_parser_registry.h_REG_NODE_PARSER-1.html), so that the self-defined node-parse will be registered in the converter tool of MindSpore Lite. ```c++ class AddParserTutorial : public NodeParser { // inherit the base class @@ -45,17 +45,17 @@ class AddParserTutorial : public NodeParser { // inherit the base class REG_NODE_PARSER(kFmkTypeTflite, ADD, std::make_shared<AddParserTutorial>()); // call the registration interface ``` -For the sample code, please refer to [node_parser](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/mindspore-lite/examples/converter_extend/node_parser). +For the sample code, please refer to [node_parser](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/mindspore-lite/examples/converter_extend/node_parser). ## Model Extension -For the sample code, please refer to the unit case [ModelParserRegistryTest](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/test/ut/tools/converter/registry/model_parser_registry_test.cc). +For the sample code, please refer to the unit case [ModelParserRegistryTest](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/test/ut/tools/converter/registry/model_parser_registry_test.cc). ### Optimization Extension -1. Self-defined Pass: The users need to inherit the base class [PassBase](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_registry_PassBase.html), and override the interface function [Execute](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_dataset_Execute.html). +1. Self-defined Pass: The users need to inherit the base class [PassBase](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_registry_PassBase.html), and override the interface function [Execute](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_dataset_Execute.html). -2. Pass Registration: The users can directly call the registration interface [REG_PASS](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_pass_registry.h_REG_PASS-1.html), so that the self-defined pass can be registered in the converter tool of MindSpore Lite. +2. Pass Registration: The users can directly call the registration interface [REG_PASS](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_pass_registry.h_REG_PASS-1.html), so that the self-defined pass can be registered in the converter tool of MindSpore Lite. ```c++ class PassTutorial : public registry::PassBase { // inherit the base class @@ -75,9 +75,9 @@ REG_PASS(PassTutorial, opt::PassTutorial) // register PassBase's sub REG_SCHEDULED_PASS(POSITION_BEGIN, {"PassTutorial"}) // register scheduling logic ``` -For the sample code, please refer to [pass](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/mindspore-lite/examples/converter_extend/pass).
+For the sample code, please refer to [pass](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/mindspore-lite/examples/converter_extend/pass). -> In the offline phase of conversion, we will infer the basic information of output tensors of each node of the model, including the format, data type and shape. So, in this phase, users need to provide the inferring process of self-defined operator. Here, users can refer to [Operator Infershape Extension](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html#operator-infershape-extension), and the sample code can be found in [infer](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.1/mindspore-lite/examples/converter_extend/infer). +> In the offline phase of conversion, we will infer the basic information of output tensors of each node of the model, including the format, data type and shape. So, in this phase, users need to provide the inferring process of the self-defined operator. Here, users can refer to [Operator Infershape Extension](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html#operator-infershape-extension), and the sample code can be found in [infer](https://gitee.com/mindspore/mindspore-lite/tree/r2.7.2/mindspore-lite/examples/converter_extend/infer). ## Example @@ -92,21 +92,21 @@ For the sample code, please refer to [pass](https://gitee.com/mindspor - Compilation preparation - The release package of MindSpore Lite doesn't provide serialized files of other frameworks, therefore, users need to compile and obtain by yourselves. Here, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/third_party/converter_register.html#overview). + The release package of MindSpore Lite doesn't provide serialized files of other frameworks; therefore, users need to compile and obtain them by themselves. Here, please refer to [Overview](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/third_party/converter_register.html#overview). - The case is a tflite model, users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/cmake/external_libs/flatbuffers.cmake) and combine the [TFLITE Proto File](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/tools/converter/parser/tflite/schema.fbs) to generate the serialized file. + The case is a tflite model, so users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/cmake/external_libs/flatbuffers.cmake) and combine the [TFLITE Proto File](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/tools/converter/parser/tflite/schema.fbs) to generate the serialized file. After generating, users need to create a directory `schema` under the directory of `mindspore-lite/examples/converter_extend` and then place the serialized file in it. - Compilation and Build - Execute the script [build.sh](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/examples/converter_extend/build.sh) in the directory of `mindspore-lite/examples/converter_extend`. And then, the released package of MindSpore Lite will be downloaded and the demo will be compiled automatically. + Execute the script [build.sh](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/examples/converter_extend/build.sh) in the directory of `mindspore-lite/examples/converter_extend`. Then the released package of MindSpore Lite will be downloaded, and the demo will be compiled automatically.
```bash bash build.sh ``` - > If the automatic download is failed, users can download the specified package manually, of which the hardware platform is CPU and the system is Ubuntu-x64 [mindspore-lite-{version}-linux-x64.tar.gz](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html), After unzipping, please copy the directory of `tools/converter/lib` and `tools/converter/include` to the directory of `mindspore-lite/examples/converter_extend`. + > If the automatic download fails, users can download the specified package manually, of which the hardware platform is CPU and the system is Ubuntu-x64: [mindspore-lite-{version}-linux-x64.tar.gz](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html). After unzipping, please copy the directory of `tools/converter/lib` and `tools/converter/include` to the directory of `mindspore-lite/examples/converter_extend`. > > After manually downloading and storing the specified file, users need to execute the `build.sh` script to complete the compilation and build process. diff --git a/docs/lite/docs/source_en/advanced/third_party/delegate.md b/docs/lite/docs/source_en/advanced/third_party/delegate.md index 2733f2d9e9..6a9f4a6e66 100644 --- a/docs/lite/docs/source_en/advanced/third_party/delegate.md +++ b/docs/lite/docs/source_en/advanced/third_party/delegate.md @@ -1,6 +1,6 @@ # Using Delegate to Support Third-party AI Framework (Device) -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party/delegate.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party/delegate.md) ## Overview @@ -10,14 +10,14 @@ Delegate of MindSpore Lite is used to support third-party AI frameworks (such as Using Delegate to support a third-party AI framework mainly includes the following steps: -1. Add a custom delegate class: Inherit the [Delegate](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Delegate.html) class to implement XXXDelegate. -2. Implementing the Init Function: The [Init](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Delegate.html) function needs to check whether the device supports the delegate framework and to apply for resources related to delegate. -3. Implementing the Build Function: The [Build](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Delegate.html) function will implement the kernel support judgment, the sub-graph construction, and the online graph building. -4. Implementing the sub-graph Kernel: Inherit the [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_kernel_Kernel.html#class-kernel) to implement delegate sub-graph Kernel. +1. Add a custom delegate class: Inherit the [Delegate](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Delegate.html) class to implement XXXDelegate. +2. Implementing the Init Function: The [Init](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Delegate.html) function needs to check whether the device supports the delegate framework and to apply for resources related to delegate. +3. 
Implementing the Build Function: The [Build](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Delegate.html) function will implement the kernel support judgment, the sub-graph construction, and the online graph building. +4. Implementing the sub-graph Kernel: Inherit the [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_kernel_Kernel.html#class-kernel) to implement delegate sub-graph Kernel. ### Adding a Custom Delegate Class -XXXDelegate should inherit from [Delegate](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Delegate.html). In the constructor of XXXDelegate, configure settings for third-party AI framework to build and execute the model, such as Kirin NPU frequency, CPU thread number, etc. +XXXDelegate should inherit from [Delegate](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Delegate.html). In the constructor of XXXDelegate, configure settings for third-party AI framework to build and execute the model, such as Kirin NPU frequency, CPU thread number, etc. ```cpp class XXXDelegate : public Delegate { @@ -34,7 +34,7 @@ class XXXDelegate : public Delegate { ### Implementing the Init -[Init](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Delegate.html) will be called during the [Build](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html) process of [Model](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html#class-model). The specific location is in the [LiteSession::Init](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/lite_session.cc#L696) function of MindSpore Lite internal process. +[Init](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Delegate.html) will be called during the [Build](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html) process of [Model](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html#class-model). The specific location is in the [LiteSession::Init](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/lite_session.cc#L696) function of MindSpore Lite internal process. ```cpp Status XXXDelegate::Init() { @@ -45,16 +45,16 @@ Status XXXDelegate::Init() { ### Implementing the Build -The input parameter of the [Build(DelegateModel *model)](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Delegate.html) interface is [DelegateModel](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_DelegateModel.html#template-class-delegatemodel). +The input parameter of the [Build(DelegateModel *model)](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Delegate.html) interface is [DelegateModel](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_DelegateModel.html#template-class-delegatemodel). -> [std::vector *kernels_](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_kernel_Kernel.html): A list of kernels that have been selected by MindSpore Lite and topologically sorted. +> [std::vector *kernels_](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_kernel_Kernel.html): A list of kernels that have been selected by MindSpore Lite and topologically sorted. 
> -> [const std::map primitives_](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_DelegateModel.html): A map of kernel and its attribute `schema::Primitive`, which is used to analyze the original attribute information. +> [const std::map primitives_](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_DelegateModel.html): A map of kernel and its attribute `schema::Primitive`, which is used to analyze the original attribute information. -[Build](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Delegate.html) will be called during the [Build](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html) process of [Model](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html#class-model). The specific location is in the [Schedule::Schedule](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/scheduler.cc#L132) function of MindSpore Lite internal process. At this time, the inner kernels have been selected by MindSpore Lite. The following steps should be implemented in Build function: +[Build](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Delegate.html) will be called during the [Build](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html) process of [Model](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html#class-model). The specific location is in the [Schedule::Schedule](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/scheduler.cc#L132) function of MindSpore Lite internal process. At this time, the inner kernels have been selected by MindSpore Lite. The following steps should be implemented in Build function: -1. Traverse the kernel list, use [GetPrimitive](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_DelegateModel.html) to get the attribute of kernel. Analyze the attribute to judge whether the delegate framework supports it. -2. For a continuous supported kernel list, construct a delegate sub-graph kernel and [Replace](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_DelegateModel.html) the continuous supported kernels with it. +1. Traverse the kernel list, use [GetPrimitive](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_DelegateModel.html) to get the attribute of kernel. Analyze the attribute to judge whether the delegate framework supports it. +2. For a continuous supported kernel list, construct a delegate sub-graph kernel and [Replace](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_DelegateModel.html) the continuous supported kernels with it. ```cpp Status XXXDelegate::Build(DelegateModel *model) { @@ -95,10 +95,10 @@ kernel::Kernel *XXXDelegate::CreateXXXGraph(KernelIter from, KernelIter end, Del } ``` -The delegate sub-graph kernel `XXXGraph` should inherit from [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_kernel_Kernel.html#class-kernel). The realization of `XXXGraph` should focus on: +The delegate sub-graph kernel `XXXGraph` should inherit from [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_kernel_Kernel.html#class-kernel). The realization of `XXXGraph` should focus on: 1. Find the correct in_tensors and out_tensors for `XXXGraph` according to the original kernels list. -2. Rewrite the Prepare, Resize, and Execute interfaces. 
[Prepare](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_kernel.html#prepare) will be called in [Build](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html) of [Model](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html#class-model). [Execute](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_kernel.html#execute) will be called in [Predict](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html) of Model. [ReSize](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore_kernel.html#resize) will be called in [Resize](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html) of Model. +2. Rewrite the Prepare, Resize, and Execute interfaces. [Prepare](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_kernel.html#prepare) will be called in [Build](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html) of [Model](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html#class-model). [Execute](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_kernel.html#execute) will be called in [Predict](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html) of Model. [ReSize](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore_kernel.html#resize) will be called in [Resize](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html) of Model. ```cpp class XXXGraph : public kernel::Kernel { @@ -127,7 +127,7 @@ class XXXGraph : public kernel::Kernel { ## Calling Delegate by Lite Framework -MindSpore Lite schedules user-defined delegate by [Context](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Context.html#class-context). Use [SetDelegate](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#setdelegate) to set a custom delegate for Context. Delegate will be passed by [Build](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html) to MindSpore Lite. If the Delegate in the Context is a null pointer, the process will call the inner inference of MindSpore Lite. +MindSpore Lite schedules user-defined delegate by [Context](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Context.html#class-context). Use [SetDelegate](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#setdelegate) to set a custom delegate for Context. Delegate will be passed by [Build](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html) to MindSpore Lite. If the Delegate in the Context is a null pointer, the process will call the inner inference of MindSpore Lite. ```cpp auto context = std::make_shared<mindspore::Context>(); @@ -156,7 +156,7 @@ if (build_ret != mindspore::kSuccess) { ## Example of NPUDelegate -Currently, MindSpore Lite uses the [NPUDelegate](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/delegate/npu/npu_delegate.h#L29) for the Kirin NPU backend. This tutorial gives a brief description of NPUDelegate, so that users can quickly understand the usage of Delegate APIs. +Currently, MindSpore Lite uses the [NPUDelegate](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/delegate/npu/npu_delegate.h#L29) for the Kirin NPU backend. This tutorial gives a brief description of NPUDelegate, so that users can quickly understand the usage of Delegate APIs.
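Before diving into the NPUDelegate internals, it helps to see how any delegate is handed to the framework end to end. The following is a minimal sketch, not the repository's code: it assumes the placeholder `XXXDelegate` class from the sections above and a model buffer already read into memory.

```cpp
#include <memory>

#include "include/api/context.h"
#include "include/api/model.h"

// Sketch only: XXXDelegate is the placeholder delegate class defined above.
int BuildWithCustomDelegate(const void *model_data, size_t data_size) {
  auto context = std::make_shared<mindspore::Context>();
  // With a null delegate, the framework falls back to built-in inference.
  context->SetDelegate(std::make_shared<XXXDelegate>());
  mindspore::Model model;
  auto ret = model.Build(model_data, data_size, mindspore::kMindIR, context);
  return ret == mindspore::kSuccess ? 0 : -1;
}
```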
### Adding the NPUDelegate Class @@ -190,7 +190,7 @@ class NPUDelegate : public Delegate { ### Implementing the Init of NPUDelegate -[Init](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L75) function is used to apply resource for Kirin NPU and determine whether the hardware supports Kirin NPU. +[Init](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L75) function is used to apply resource for Kirin NPU and determine whether the hardware supports Kirin NPU. ```cpp Status NPUDelegate::Init() { @@ -217,7 +217,7 @@ Status NPUDelegate::Init() { ### Implementing the Build of NPUDelegate -The [Build](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L163) interface parses the DelegateModel and mainly implements the kernel support judgment, the sub-graph construction, and the online graph building. +The [Build](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L163) interface parses the DelegateModel and mainly implements the kernel support judgment, the sub-graph construction, and the online graph building. ```cpp Status NPUDelegate::Build(DelegateModel *model) { @@ -257,7 +257,7 @@ Status NPUDelegate::Build(DelegateModel *model) { ### Creating NPUGraph -The following [Sample Code](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L273) is the CreateNPUGraph interface of NPUDelegate, used to generate a Kirin NPU sub-graph kernel. +The following [Sample Code](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/delegate/npu/npu_delegate.cc#L273) is the CreateNPUGraph interface of NPUDelegate, used to generate a Kirin NPU sub-graph kernel. ```cpp kernel::Kernel *NPUDelegate::CreateNPUGraph(const std::vector &ops) { @@ -279,7 +279,7 @@ kernel::Kernel *NPUDelegate::CreateNPUGraph(const std::vector &ops) { ### Adding the NPUGraph Class -[NPUGraph](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/delegate/npu/npu_graph.h#L29) inherits from [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_kernel_Kernel.html#class-kernel). And we need to rewrite the Prepare, Execute, and ReSize interfaces. +[NPUGraph](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/delegate/npu/npu_graph.h#L29) inherits from [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_kernel_Kernel.html#class-kernel). And we need to rewrite the Prepare, Execute, and ReSize interfaces. 
```cpp class NPUGraph : public kernel::Kernel { @@ -306,7 +306,7 @@ class NPUGraph : public kernel::Kernel { }; ``` -[NPUGraph::Prepare](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/delegate/npu/npu_graph.cc#L306) mainly implements: +[NPUGraph::Prepare](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/delegate/npu/npu_graph.cc#L306) mainly implements: ```cpp int NPUGraph::Prepare() { @@ -314,7 +314,7 @@ int NPUGraph::Prepare() { } ``` -[NPUGraph::Execute](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/src/litert/delegate/npu/npu_graph.cc#L322) mainly implements: +[NPUGraph::Execute](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/src/litert/delegate/npu/npu_graph.cc#L322) mainly implements: ```cpp int NPUGraph::Execute() { @@ -325,4 +325,4 @@ int NPUGraph::Execute() { } ``` -> [Kirin NPU](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/third_party/npu_info.html) is a third-party AI framework that was added by MindSpore Lite internal developers. The usage of Kirin NPU is slightly different. You can set the [Context](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Context.html#class-context) through [SetDelegate](https://www.mindspore.cn/lite/api/zh-CN/r2.7.1/api_cpp/mindspore.html#setdelegate), or you can add the description of the Kirin NPU device [KirinNPUDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_KirinNPUDeviceInfo.html#class-kirinnpudeviceinfo) to [MutableDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Context.html) of the Context. +> [Kirin NPU](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/third_party/npu_info.html) is a third-party AI framework that was added by MindSpore Lite internal developers. The usage of Kirin NPU is slightly different. You can set the [Context](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Context.html#class-context) through [SetDelegate](https://www.mindspore.cn/lite/api/zh-CN/r2.7.2/api_cpp/mindspore.html#setdelegate), or you can add the description of the Kirin NPU device [KirinNPUDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_KirinNPUDeviceInfo.html#class-kirinnpudeviceinfo) to [MutableDeviceInfo](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Context.html) of the Context. diff --git a/docs/lite/docs/source_en/advanced/third_party/npu_info.md b/docs/lite/docs/source_en/advanced/third_party/npu_info.md index c6bd58e1f5..84585cfc2d 100644 --- a/docs/lite/docs/source_en/advanced/third_party/npu_info.md +++ b/docs/lite/docs/source_en/advanced/third_party/npu_info.md @@ -1,12 +1,12 @@ # Kirin NPU Integration Information -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party/npu_info.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party/npu_info.md) ## Steps ### Environment Preparation -Besides basic [Environment Preparation](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html), using the Kirin NPU requires the integration of the HUAWEI HiAI DDK. 
+Besides basic [Environment Preparation](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html), using the Kirin NPU requires the integration of the HUAWEI HiAI DDK. HUAWEI HiAI DDK, which contains APIs (including building, loading models and calculation processes) and interfaces implemented to encapsulate dynamic libraries (namely libhiai*.so), is required for the use of Kirin NPU. Download [DDK 100.510.010.010](https://developer.huawei.com/consumer/en/doc/development/hiai-Library/ddk-download-0000001053590180), and set the directory of extracted files as `${HWHIAI_DDK}`. Our build script uses this environment variable to seek DDK. @@ -20,7 +20,7 @@ export MSLITE_ENABLE_NPU=ON bash build.sh -I arm64 -j8 ``` -For more information about compilation, see [Linux Environment Compilation](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html#linux-environment-compilation). +For more information about compilation, see [Linux Environment Compilation](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html#linux-environment-compilation). ### Integration @@ -28,10 +28,10 @@ For more information about compilation, see [Linux Environment Compilation](http When developers need to integrate the use of Kirin NPU features, it is important to note: - - [Configure the Kirin NPU backend](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html#configuring-the-kirin-npu-backend). - For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html). + - [Configure the Kirin NPU backend](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html#configuring-the-kirin-npu-backend). + For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html). - - Compile and execute the binary. If you use dynamic linking, refer to [compile output](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html) when the compile option is `-I arm64` or `-I arm32`. + - Compile and execute the binary. If you use dynamic linking, refer to [compile output](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html) when the compile option is `-I arm64` or `-I arm32`. Configured environment variables will dynamically load libhiai.so, libhiai_ir.so, libhiai_ir_build.so, libhiai_hcl_model_runtime.so. For example, ```bash @@ -54,7 +54,7 @@ For more information about compilation, see [Linux Environment Compilation](http ./benchmark --device=NPU --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --inputShapes=1,32,32,1 --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out ``` -For more information about the use of Benchmark, see [Benchmark Use](https://www.mindspore.cn/lite/docs/en/r2.7.1/tools/benchmark_tool.html). +For more information about the use of Benchmark, see [Benchmark Use](https://www.mindspore.cn/lite/docs/en/r2.7.2/tools/benchmark_tool.html). For environment variable settings, you need to set the directory where the libmindspore-lite.so (under the directory `mindspore-lite-{version}-android-{arch}/runtime/lib`) and Kirin NPU libraries (under the directory `mindspore-lite-{version}-android-{arch}/runtime/third_party/hiai_ddk/lib/`) are located, to `${LD_LIBRARY_PATH}`. 
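Pulling the integration notes above together, configuring the Kirin NPU backend in code looks roughly like the sketch below. It is illustrative only: the frequency level 3 (high performance) and the CPU fallback entry are assumptions, not requirements.

```cpp
#include <memory>

#include "include/api/context.h"

// Sketch: build a context that prefers the Kirin NPU, with an assumed CPU fallback.
std::shared_ptr<mindspore::Context> CreateKirinNpuContext() {
  auto context = std::make_shared<mindspore::Context>();
  auto npu_info = std::make_shared<mindspore::KirinNPUDeviceInfo>();
  npu_info->SetFrequency(3);  // assumption: 3 selects the high-performance level
  auto &devices = context->MutableDeviceInfo();
  devices.push_back(npu_info);
  devices.push_back(std::make_shared<mindspore::CPUDeviceInfo>());  // fallback for unsupported ops
  return context;
}
```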
@@ -64,4 +64,4 @@ For supported Kirin NPU chips, see [Chipset Platforms and Supported HUAWEI HiAI ## Supported Operators -For supported Kirin NPU operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.1/reference/operator_list_lite.html). \ No newline at end of file +For supported Kirin NPU operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.2/reference/operator_list_lite.html). \ No newline at end of file diff --git a/docs/lite/docs/source_en/advanced/third_party/register.rst b/docs/lite/docs/source_en/advanced/third_party/register.rst index ede8744af1..44b89f768b 100644 --- a/docs/lite/docs/source_en/advanced/third_party/register.rst +++ b/docs/lite/docs/source_en/advanced/third_party/register.rst @@ -1,8 +1,8 @@ Custom Kernel =============== -.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg - :target: https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party/register.rst +.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg + :target: https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party/register.rst :alt: View Source On Gitee .. toctree:: diff --git a/docs/lite/docs/source_en/advanced/third_party/register_kernel.md b/docs/lite/docs/source_en/advanced/third_party/register_kernel.md index d663935f44..c91ad05b94 100644 --- a/docs/lite/docs/source_en/advanced/third_party/register_kernel.md +++ b/docs/lite/docs/source_en/advanced/third_party/register_kernel.md @@ -1,6 +1,6 @@ # Building Custom Operators Online -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party/register_kernel.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party/register_kernel.md) ## Implementing Custom Operators @@ -18,11 +18,11 @@ View the operator prototype definition in mindspore-lite/schema/ops.fbs. Check w ### Common Operators -For details about code related to implementation, registration, and InferShape of an operator, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/test/ut/src/registry/registry_test.cc). +For details about code related to implementation, registration, and InferShape of an operator, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/test/ut/src/registry/registry_test.cc). #### Implementing Common Operators -Inherit [mindspore::kernel::Kernel](https://www.mindspore.cn/lite/api/en/r2.7.1/api_cpp/mindspore_kernel.html) and overload necessary APIs. The following describes how to customize an Add operator: +Inherit [mindspore::kernel::Kernel](https://www.mindspore.cn/lite/api/en/r2.7.2/api_cpp/mindspore_kernel.html) and overload necessary APIs. The following describes how to customize an Add operator: 1. An operator inherits a kernel. 2. PreProcess() pre-allocates memory. 
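A condensed sketch of such a kernel follows. The constructor signature mirrors the registry tests linked above; the float32-only, shape-matched `Execute` body and the header paths are simplifying assumptions.

```cpp
#include <vector>

#include "include/api/kernel.h"  // assumed header for mindspore::kernel::Kernel

using mindspore::kernel::Kernel;

// Sketch of a custom Add kernel; not the repository's exact code.
class TestCustomAdd : public Kernel {
 public:
  TestCustomAdd(const std::vector<mindspore::MSTensor> &inputs,
                const std::vector<mindspore::MSTensor> &outputs,
                const mindspore::schema::Primitive *primitive, const mindspore::Context *ctx)
      : Kernel(inputs, outputs, primitive, ctx) {}

  int Prepare() override { return 0; }  // pre-allocate working memory here if needed
  int ReSize() override { return 0; }

  int Execute() override {
    // Element-wise float32 add, assuming both inputs share the output shape.
    auto in0 = static_cast<const float *>(inputs_[0].Data().get());
    auto in1 = static_cast<const float *>(inputs_[1].Data().get());
    auto out = static_cast<float *>(outputs_[0].MutableData());
    for (int64_t i = 0; i < outputs_[0].ElementNum(); ++i) {
      out[i] = in0[i] + in1[i];
    }
    return 0;
  }
};
```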
@@ -74,7 +74,7 @@ int TestCustomAdd::Execute() { #### Registering Common Operators -Currently, the generated macro [REGISTER_KERNEL](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_registry_RegisterKernel.html) is provided for operator registration. The implementation procedure is as follows: +Currently, the generated macro [REGISTER_KERNEL](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_registry_RegisterKernel.html) is provided for operator registration. The implementation procedure is as follows: 1. The TestCustomAddCreator function is used to create a kernel. 2. Use the macro REGISTER_KERNEL to register the kernel. Assume that the vendor is BuiltInTest. @@ -96,7 +96,7 @@ REGISTER_KERNEL(CPU, BuiltInTest, kFloat32, PrimitiveType_AddFusion, TestCustomA Override the Infer function after inheriting KernelInterface to implement the InferShape capability. The implementation procedure is as follows: -1. Inherit [KernelInterface](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_kernel_KernelInterface.html). +1. Inherit [KernelInterface](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_kernel_KernelInterface.html). 2. Overload the Infer function to derive the shape, format, and data_type of the output tensor. The following uses the custom Add operator as an example: @@ -120,7 +120,7 @@ class TestCustomAddInfer : public KernelInterface { #### Registering the Common Operator InferShape -Currently, the generated macro [REGISTER_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_registry_RegisterKernelInterface.html) is provided for registering the operator InferShape. The procedure is as follows: +Currently, the generated macro [REGISTER_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_registry_RegisterKernelInterface.html) is provided for registering the operator InferShape. The procedure is as follows: 1. Use the CustomAddInferCreator function to create a KernelInterface instance. 2. Call the REGISTER_KERNEL_INTERFACE macro to register the common operator InferShape. Assume that the vendor is BuiltInTest. @@ -133,7 +133,7 @@ REGISTER_KERNEL_INTERFACE(BuiltInTest, PrimitiveType_AddFusion, CustomAddInferCr ### Custom Operators -For details about code related to parsing, creating, and operating custom operators, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/test/ut/tools/converter/registry/pass_registry_test.cc). +For details about code related to parsing, creating, and operating custom operators, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/test/ut/tools/converter/registry/pass_registry_test.cc). #### Defining Custom Operators @@ -220,11 +220,11 @@ REG_SCHEDULED_PASS(POSITION_BEGIN, schedule) // Set the external Pass sche } // namespace mindspore::opt ``` -For details about code related to implementation, registration, and InferShape of a custom operator, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/test/ut/src/registry/registry_custom_op_test.cc). +For details about code related to implementation, registration, and InferShape of a custom operator, see [the code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/test/ut/src/registry/registry_custom_op_test.cc). 
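Before moving on to custom operators, the common-operator registration flow above can be stitched into one place. In this sketch, the creator signature is assumed to match the registry tests, and `TestCustomAdd` stands in for the kernel class from the previous section:

```cpp
#include <memory>
#include <vector>

#include "include/api/kernel.h"
#include "include/registry/register_kernel.h"

// Creator the registry calls to instantiate the kernel (signature assumed from the tests).
std::shared_ptr<mindspore::kernel::Kernel> TestCustomAddCreator(
    const std::vector<mindspore::MSTensor> &inputs, const std::vector<mindspore::MSTensor> &outputs,
    const mindspore::schema::Primitive *primitive, const mindspore::Context *ctx) {
  return std::make_shared<TestCustomAdd>(inputs, outputs, primitive, ctx);
}

const auto kFloat32 = mindspore::DataType::kNumberTypeFloat32;
// Vendor BuiltInTest serves float32 AddFusion with the kernel above.
REGISTER_KERNEL(CPU, BuiltInTest, kFloat32, PrimitiveType_AddFusion, TestCustomAddCreator)
```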
#### Implementing Custom Operators -The implementation procedure of a custom operator is the same as that of a common operator, because they are specific subclasses of [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.1/api_cpp/mindspore_kernel.html). +The implementation procedure of a custom operator is the same as that of a common operator, because they are specific subclasses of [Kernel](https://www.mindspore.cn/lite/api/en/r2.7.2/api_cpp/mindspore_kernel.html). If the custom operator does not run on the CPU platform, the result needs to be copied back to the output tensor after the running is complete. The following describes how to create a custom operator with the Add capability: 1. An operator inherits a kernel. @@ -295,7 +295,7 @@ In the example, the byte stream in the attribute is copied to the buf. #### Registering Custom Operators -Currently, the generated macro [REGISTER_CUSTOM_KERNEL](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_register_kernel.h_REGISTER_CUSTOM_KERNEL-1.html) is provided for operator registration. The procedure is as follows: +Currently, the generated macro [REGISTER_CUSTOM_KERNEL](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_register_kernel.h_REGISTER_CUSTOM_KERNEL-1.html) is provided for operator registration. The procedure is as follows: 1. The TestCustomAddCreator function is used to create a kernel. 2. Use the macro REGISTER_CUSTOM_KERNEL to register an operator. Assume that the vendor is BuiltInTest and the operator type is Add. @@ -316,7 +316,7 @@ REGISTER_CUSTOM_KERNEL(CPU, BuiltInTest, kFloat32, Add, TestCustomAddCreator) The overall implementation is the same as that of the common operator InferShape. The procedure is as follows: -1. Inherit [KernelInterface](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_kernel_KernelInterface.html). +1. Inherit [KernelInterface](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_kernel_KernelInterface.html). 2. Overload the Infer function to derive the shape, format, and data_type of the output tensor. ```cpp @@ -336,10 +336,10 @@ class TestCustomOpInfer : public KernelInterface { #### Registering the Custom Operator InferShape -Currently, the generated macro [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_register_kernel_interface.h_REGISTER_CUSTOM_KERNEL_INTERFACE-1.html) is provided for registering the custom operator InferShape. The procedure is as follows: +Currently, the generated macro [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_register_kernel_interface.h_REGISTER_CUSTOM_KERNEL_INTERFACE-1.html) is provided for registering the custom operator InferShape. The procedure is as follows: 1. Use the CustomAddInferCreator function to create a custom KernelInterface. -2. The macro [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/define_register_kernel_interface.h_REGISTER_CUSTOM_KERNEL_INTERFACE-1.html) is provided for registering the InferShape capability. The operator type Add must be the same as that in REGISTER_CUSTOM_KERNEL_INTERFACE. +2. The macro [REGISTER_CUSTOM_KERNEL_INTERFACE](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/define_register_kernel_interface.h_REGISTER_CUSTOM_KERNEL_INTERFACE-1.html) is provided for registering the InferShape capability. The operator type Add must be the same as that in REGISTER_CUSTOM_KERNEL_INTERFACE. 
```cpp std::shared_ptr CustomAddInferCreator() { return std::make_shared(); } @@ -349,9 +349,9 @@ REGISTER_CUSTOM_KERNEL_INTERFACE(BuiltInTest, Add, CustomAddInferCreator) ## Custom GPU Operators -A set of GPU-related functional APIs are provided to facilitate the development of the GPU-based custom operator and enable the GPU-based custom operator to share the same resources with the internal GPU-based operators to improve the scheduling efficiency. For details about the APIs, see [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.1/api_cpp/mindspore_registry_opencl.html). +A set of GPU-related functional APIs are provided to facilitate the development of the GPU-based custom operator and enable the GPU-based custom operator to share the same resources with the internal GPU-based operators to improve the scheduling efficiency. For details about the APIs, see [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.2/api_cpp/mindspore_registry_opencl.html). This document describes how to develop a custom GPU operator by parsing sample code. Before reading this document, you need to understand [Implement Custom Operators](#implementing-custom-operators). -The [code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.1/mindspore-lite/test/ut/src/registry/registry_gpu_custom_op_test.cc) contains implementation and registration of custom GPU operators. +The [code repository](https://gitee.com/mindspore/mindspore-lite/blob/r2.7.2/mindspore-lite/test/ut/src/registry/registry_gpu_custom_op_test.cc) contains implementation and registration of custom GPU operators. ### Registering Operators @@ -394,7 +394,7 @@ std::shared_ptr CustomAddCreator(const std::vector &in #### Registering Operators When registering GPU operators, you must declare the device type as GPU and transfer the operator instance creation function `CustomAddCreator` implemented in the previous step. -In this example, the Float32 implementation of the Custom_Add operator is registered. The registration code is as follows. For details about other parameters in the registration macro, see the [API](https://www.mindspore.cn/lite/api/en/r2.7.1/api_cpp/mindspore_registry.html). +In this example, the Float32 implementation of the Custom_Add operator is registered. The registration code is as follows. For details about other parameters in the registration macro, see the [API](https://www.mindspore.cn/lite/api/en/r2.7.2/api_cpp/mindspore_registry.html). ```cpp const auto kFloat32 = DataType::kNumberTypeFloat32; @@ -404,7 +404,7 @@ REGISTER_CUSTOM_KERNEL(GPU, BuiltInTest, kFloat32, Custom_Add, CustomAddCreator) ### Implementing Operators -In this example, the operator is implemented as the `CustomAddKernel` class. This class inherits [mindspore::kernel::Kernel](https://www.mindspore.cn/lite/api/en/r2.7.1/api_cpp/mindspore_kernel.html) and overloads necessary APIs to implement the custom operator computation. +In this example, the operator is implemented as the `CustomAddKernel` class. This class inherits [mindspore::kernel::Kernel](https://www.mindspore.cn/lite/api/en/r2.7.2/api_cpp/mindspore_kernel.html) and overloads necessary APIs to implement the custom operator computation. #### Constructor and Destructor Functions @@ -428,7 +428,7 @@ class CustomAddKernel : public kernel::Kernel { - opencl_runtime_ - An instance of the OpenCLRuntimeWrapper class. 
In an operator, this object can be used to call the OpenCL-related API [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.1/api_cpp/mindspore_registry_opencl.html) provided by MindSpore Lite. + An instance of the OpenCLRuntimeWrapper class. In an operator, this object can be used to call the OpenCL-related API [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.2/api_cpp/mindspore_registry_opencl.html) provided by MindSpore Lite. - fp16_enable_ @@ -440,7 +440,7 @@ class CustomAddKernel : public kernel::Kernel { - Other variables - Other variables are required for OpenCL operations. For details, see [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.1/api_cpp/mindspore_registry_opencl.html). + Other variables are required for OpenCL operations. For details, see [mindspore::registry::opencl](https://www.mindspore.cn/lite/api/en/r2.7.2/api_cpp/mindspore_registry_opencl.html). ```c++ class CustomAddKernel : public kernel::Kernel { diff --git a/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md b/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md index 053d62a6c5..b8a93463be 100644 --- a/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md +++ b/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md @@ -1,12 +1,12 @@ # TensorRT Integration Information -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/advanced/third_party/tensorrt_info.md) ## Steps ### Environment Preparation -Besides basic [Environment Preparation](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html), CUDA and TensorRT are required as well. Current version supports [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) and [TensorRT 6.0.1.5](https://developer.nvidia.com/nvidia-tensorrt-6x-download), and [CUDA 11.1](https://developer.nvidia.com/cuda-11.1.1-download-archive) and [TensorRT 8.5.1](https://developer.nvidia.com/nvidia-tensorrt-8x-download). +Besides basic [Environment Preparation](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html), CUDA and TensorRT are required as well. Current version supports [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) and [TensorRT 6.0.1.5](https://developer.nvidia.com/nvidia-tensorrt-6x-download), and [CUDA 11.1](https://developer.nvidia.com/cuda-11.1.1-download-archive) and [TensorRT 8.5.1](https://developer.nvidia.com/nvidia-tensorrt-8x-download). Install the appropriate version of CUDA and set the installed directory as environment variable `${CUDA_HOME}`. Our build script uses this environment variable to seek CUDA. @@ -20,17 +20,17 @@ In the Linux environment, use the build.sh script in the root directory of MindS bash build.sh -I x86_64 ``` -For more information about compilation, see [Linux Environment Compilation](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html#linux-environment-compilation). +For more information about compilation, see [Linux Environment Compilation](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html#linux-environment-compilation). 
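The Integration notes that follow configure the TensorRT (GPU) backend in code. As a rough sketch of that configuration, in which device ID 0 and the CPU fallback entry are assumptions:

```cpp
#include <memory>

#include "include/api/context.h"

// Sketch: context targeting the TensorRT (GPU) backend with an assumed CPU fallback.
std::shared_ptr<mindspore::Context> CreateTensorRTContext() {
  auto context = std::make_shared<mindspore::Context>();
  auto gpu_info = std::make_shared<mindspore::GPUDeviceInfo>();
  gpu_info->SetDeviceID(0);  // assumption: first CUDA device
  context->MutableDeviceInfo().push_back(gpu_info);
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());
  return context;
}
```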
### Integration - Integration instructions When developers need to integrate TensorRT features, it is important to note: - - [Configure the TensorRT backend](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html#configuring-the-gpu-backend) in the code. - For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/r2.7.1/infer/runtime_cpp.html). + - [Configure the TensorRT backend](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html#configuring-the-gpu-backend) in the code. + For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/r2.7.2/infer/runtime_cpp.html). - - Compile and execute the binary. If you use dynamic linking, please refer to [Compilation Output](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html#directory-structure) with compilation option `-I x86_64`. + - Compile and execute the binary. If you use dynamic linking, please refer to [Compilation Output](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html#directory-structure) with compilation option `-I x86_64`. Please set environment variables to dynamically link the related libraries. ```bash @@ -41,7 +41,7 @@ For more information about compilation, see [Linux Environment Compilation](http - Using Benchmark to test TensorRT inference - Users can also test TensorRT inference using MindSpore Lite Benchmark tool. The location of the compiled Benchmark is shown in [Compiled Output](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html). Pass the build package to a device with a TensorRT environment (TensorRT 6.0.1.5) and use the Benchmark tool to test TensorRT inference. Examples are as follows: + Users can also test TensorRT inference using the MindSpore Lite Benchmark tool. The location of the compiled Benchmark is shown in [Compiled Output](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html). Pass the build package to a device with a TensorRT environment (TensorRT 6.0.1.5) and use the Benchmark tool to test TensorRT inference. Examples are as follows: - Test performance @@ -55,14 +55,14 @@ For more information about compilation, see [Linux Environment Compilation](http ./benchmark --device=GPU --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --inputShapes=1,32,32,1 --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out ``` - For more information about the use of Benchmark, see [Benchmark Use](https://www.mindspore.cn/lite/docs/en/r2.7.1/tools/benchmark.html). + For more information about the use of Benchmark, see [Benchmark Use](https://www.mindspore.cn/lite/docs/en/r2.7.2/tools/benchmark.html). For environment variable settings, you need to add the directories where `libmindspore-lite.so` (under `mindspore-lite-{version}-{os}-{arch}/runtime/lib`) and the TensorRT and CUDA `so` libraries are located to `${LD_LIBRARY_PATH}`. - Using TensorRT engine serialization - TensorRT backend inference supports serializing the built TensorRT model (Engine) into a binary file and saves it locally. When it is used the next time, the model can be deserialized and loaded from the local, avoiding rebuilding and reducing overhead. To support this function, users need to use the [LoadConfig](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html) interface to load the configuration file in the code, you need to specify the saving path of serialization file in the configuration file: + TensorRT backend inference supports serializing the built TensorRT model (engine) into a binary file and saving it locally. The next time it is used, the model can be deserialized and loaded from the local file, avoiding a rebuild and reducing overhead. To support this function, users need to use the [LoadConfig](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html) interface to load a configuration file in the code and specify the save path of the serialization file in that configuration file: ``` [ms_cache] @@ -73,7 +73,7 @@ For more information about compilation, see [Linux Environment Compilation](http By default, TensorRT optimizes the model based on the input shapes (batch size, image size, and so on) at which it was defined. However, the input dimension can be adjusted at runtime by configuring the profile. In the profile, the minimum, optimal, and maximum shape of each input can be set. - TensorRT creates an optimized engine for each profile, choosing CUDA kernels that work for all shapes within the [minimum ~ maximum] range. And in the profile, multiple input dimensions can be configured for a single input. To support this function, users need to use the [LoadConfig](https://www.mindspore.cn/lite/api/en/r2.7.1/generate/classmindspore_Model.html) interface to load the configuration file in the code. + TensorRT creates an optimized engine for each profile, choosing CUDA kernels that work for all shapes within the [minimum ~ maximum] range. Multiple input dimensions can also be configured for a single input in the profile. To support this function, users need to use the [LoadConfig](https://www.mindspore.cn/lite/api/en/r2.7.2/generate/classmindspore_Model.html) interface to load the configuration file in the code. If min, opt, and max are the minimum, optimal, and maximum dimensions, and real_shape is the shape of the input tensor, the following conditions must hold: @@ -102,4 +102,4 @@ For more information about compilation, see [Linux Environment Compilation](http ## Supported Operators -For supported TensorRT operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.1/reference/operator_list_lite.html). +For supported TensorRT operators, see [Lite Operator List](https://www.mindspore.cn/lite/docs/en/r2.7.2/reference/operator_list_lite.html).
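As a companion to the `LoadConfig` references in the hunks above, here is a minimal usage sketch: a GPU (TensorRT) context is created, the configuration file is loaded before `Build` so that the `[ms_cache]` section takes effect, and the model is then compiled. The file names `gpu.cfg` and `model.ms` are illustrative assumptions, and error handling is trimmed.

```cpp
#include <memory>
#include "include/api/context.h"
#include "include/api/model.h"

int BuildWithSerializationCache() {
  // TensorRT inference runs through the GPU device backend.
  auto context = std::make_shared<mindspore::Context>();
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::GPUDeviceInfo>());

  mindspore::Model model;
  // LoadConfig must precede Build for [ms_cache] serialize_path to take effect.
  if (model.LoadConfig("gpu.cfg") != mindspore::kSuccess) {
    return -1;
  }
  if (model.Build("model.ms", mindspore::kMindIR, context) != mindspore::kSuccess) {
    return -1;
  }
  return 0;  // subsequent builds can reuse the serialized TensorRT engine
}
```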
diff --git a/docs/lite/docs/source_en/build/build.md b/docs/lite/docs/source_en/build/build.md index e345689f77..c1fcf70ba2 100644 --- a/docs/lite/docs/source_en/build/build.md +++ b/docs/lite/docs/source_en/build/build.md @@ -1,6 +1,6 @@ # Building Device-side -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/build/build.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/build/build.md) This chapter introduces how to quickly compile MindSpore Lite, which includes the following modules: @@ -93,7 +93,7 @@ The construction of modules is controlled by environment variables.
Users can co | MSLITE_ENABLE_MODEL_PRE_INFERENCE | Whether to enable pre-inference during model compilation | on, off | off | | MSLITE_ENABLE_GITEE_MIRROR | Whether to download third_party dependencies from the Gitee mirror | on, off | off | - > - For TensorRT and Kirin NPU compilation environment configuration, refer to [Application Specific Integrated Circuit Integration Instructions](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/third_party/asic.html). + > - For TensorRT and Kirin NPU compilation environment configuration, refer to [Application Specific Integrated Circuit Integration Instructions](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/third_party/asic.html). > - When the AVX instruction set is enabled, the CPU of the running environment needs to support both AVX and FMA features. > - The compilation time of the model conversion tool is long. If it is not needed, it is recommended to turn off the compilation of the conversion tool via `MSLITE_ENABLE_CONVERTER` to speed up the build. > - The version supported by the OpenSSL encryption library is 1.1.1k, which needs to be downloaded and compiled by the user. For the compilation, please refer to the OpenSSL documentation. In addition, the path of libcrypto.so.1.1 should be added to LD_LIBRARY_PATH. @@ -102,7 +102,7 @@ The construction of modules is controlled by environment variables. Users can co - Runtime feature compilation options - If the user is sensitive to the package size of the framework, the following options can be configured to reduce the package size by reducing the function of the runtime model reasoning framework. Then, the user can further reduce the package size by operator reduction through the [cropper tool](https://www.mindspore.cn/lite/docs/en/r2.7.1/tools/cropper_tool.html). + If the user is sensitive to the package size of the framework, the following options can be configured to reduce the package size by trimming the functionality of the runtime model inference framework. The user can then further reduce the package size through operator cropping with the [cropper tool](https://www.mindspore.cn/lite/docs/en/r2.7.2/tools/cropper_tool.html). | Option | Parameter Description | Value Range | Defaults | | -------- | ----- | ---- | ---- | @@ -121,7 +121,7 @@ The construction of modules is controlled by environment variables. Users can co First, download source code from the MindSpore Lite code repository. ```bash -git clone -b r2.7.1 https://gitee.com/mindspore/mindspore-lite.git +git clone -b r2.7.2 https://gitee.com/mindspore/mindspore-lite.git ``` Then, run the following commands in the root directory of the source code to compile different versions of MindSpore Lite: @@ -318,7 +318,7 @@ The script `build.bat` in the root directory of MindSpore Lite can be used to co First, use the git tool to download the source code from the MindSpore Lite code repository. ```bat -git clone -b r2.7.1 https://gitee.com/mindspore/mindspore-lite.git +git clone -b r2.7.2 https://gitee.com/mindspore/mindspore-lite.git ``` Then, use the cmd tool to compile MindSpore Lite in the root directory of the source code and execute the following commands. @@ -411,7 +411,7 @@ The script `build.sh` in the root directory of MindSpore Lite can be used to com First, use the git tool to download the source code from the MindSpore Lite code repository.
```bash -git clone -b r2.7.1 https://gitee.com/mindspore/mindspore-lite.git +git clone -b r2.7.2 https://gitee.com/mindspore/mindspore-lite.git ``` Then, compile MindSpore Lite in the root directory of the source code by executing the following commands. diff --git a/docs/lite/docs/source_en/converter/converter_tool.md b/docs/lite/docs/source_en/converter/converter_tool.md index 3536caa479..36c26afb6e 100644 --- a/docs/lite/docs/source_en/converter/converter_tool.md +++ b/docs/lite/docs/source_en/converter/converter_tool.md @@ -1,6 +1,6 @@ # Device-side Models Conversion -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.1/docs/lite/docs/source_en/converter/converter_tool.md) +[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.2/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.2/docs/lite/docs/source_en/converter/converter_tool.md) ## Overview @@ -16,7 +16,7 @@ The `ms` model converted by the conversion tool supports the conversion tool and To use the MindSpore Lite model conversion tool, you need to prepare the environment as follows: -- [Compile](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html) model transfer tool. +- [Compile](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html) the model conversion tool. - Add the path of the dynamic libraries required by the conversion tool to the environment variable LD_LIBRARY_PATH. @@ -85,9 +85,9 @@ The following describes the parameters in detail. > - The Caffe model is divided into two files: model structure `*.prototxt`, corresponding to the `--modelFile` parameter; model weight `*.caffemodel`, corresponding to the `--weightFile` parameter. > - The priority of the `--fp16` option is very low. For example, if quantization is enabled, `--fp16` will no longer take effect on const tensors that have been quantized. In short, this option only takes effect on float32 const tensors when serializing the model. > - `inputDataFormat`: generally, when integrating third-party hardware with NCHW specifications, setting it to NCHW will yield a significant performance improvement over NHWC. In other scenarios, users can set it as needed. -> - The `configFile` configuration files uses the `key=value` mode to define related parameters. For the configuration parameters related to quantization, please refer to [quantization](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/quantization.html). For the configuration parameters related to extension, please refer to [Extension Configuration](https://www.mindspore.cn/lite/docs/en/r2.7.1/advanced/third_party/converter_register.html#extension-configuration). +> - The `configFile` configuration file uses the `key=value` format to define related parameters. For the configuration parameters related to quantization, please refer to [quantization](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/quantization.html). For the configuration parameters related to extension, please refer to [Extension Configuration](https://www.mindspore.cn/lite/docs/en/r2.7.2/advanced/third_party/converter_register.html#extension-configuration).
> - The `--optimize` parameter is used to set the mode of optimization during the offline conversion. If this parameter is set to none, no relevant graph optimization operations will be performed during the offline conversion phase of the model, and the relevant graph optimization operations will be done during the execution of the inference phase. The advantage of this parameter is that the converted model can be deployed directly to any CPU/GPU/Ascend hardware backend since it is not optimized in a specific way, while the disadvantage is that the initialization time of the model increases during inference execution. If this parameter is set to general, general optimization will be performed, such as constant folding and operator fusion (the converted model only supports CPU/GPU hardware backend, not Ascend backend). If this parameter is set to gpu_oriented, the general optimization and extra optimization for GPU hardware will be performed (the converted model only supports GPU hardware backend). If this parameter is set to ascend_oriented, the optimization for Ascend hardware will be performed (the converted model only supports Ascend hardware backend). -> - The encryption and decryption function only takes effect when `MSLITE_ENABLE_MODEL_ENCRYPTION=on` is set at [compile](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html) time and only supports Linux x86 platforms, and the key is a string represented by hexadecimal. Users on the Linux platform can use the `xxd` tool to convert the key represented by the bytes to a hexadecimal representation. It should be noted that the encryption and decryption algorithm has been updated in version 1.7. As a result, the new version of the converter tool does not support the conversion of the encrypted model exported by MindSpore Lite in version 1.6 and earlier. +> - The encryption and decryption function only takes effect when `MSLITE_ENABLE_MODEL_ENCRYPTION=on` is set at [compile](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html) time and only supports Linux x86 platforms; the key is a hexadecimal string. Users on the Linux platform can use the `xxd` tool to convert a byte-form key to its hexadecimal representation. Note that the encryption and decryption algorithm was updated in version 1.7, so the new version of the converter tool does not support converting encrypted models exported by MindSpore Lite 1.6 and earlier. > - Parameters `--input_shape` and dynamicDims are stored in the model during conversion. Call model.get_model_info("input_shape") and model.get_model_info("dynamic_dims") to retrieve them when using the model. ### CPU Model Optimization @@ -179,7 +179,7 @@ To use the MindSpore Lite model conversion tool, the following environment prepa - The Windows conversion tool is compiled based on MinGW-w64 and depends on related dynamic libraries. Therefore, [mingw-w64](https://www.mingw-w64.org/downloads/) must be installed. -- [Compile](https://www.mindspore.cn/lite/docs/en/r2.7.1/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/r2.7.1/use/downloads.html) model transfer tool. +- [Compile](https://www.mindspore.cn/lite/docs/en/r2.7.2/build/build.html) or [download](https://www.mindspore.cn/lite/docs/en/r2.7.2/use/downloads.html) the model conversion tool. - Add the path of the dynamic libraries required by the conversion tool to the environment variable PATH. @@ -209,7 +209,7 @@ mindspore-lite-{version}-win-x64 ### Parameter Description -Refer to the Linux environment model conversion tool [parameter description](https://www.mindspore.cn/lite/docs/en/r2.7.1/converter/converter_tool.html#parameter-description). +Refer to the [parameter description](https://www.mindspore.cn/lite/docs/en/r2.7.2/converter/converter_tool.html#parameter-description) of the Linux model conversion tool. ### Example
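To round off the converter notes above, the following is a minimal sketch of consuming a converted `ms` model at runtime. The file name `lenet.ms` and the CPU-only context are illustrative assumptions; as noted above, shapes recorded at conversion time travel with the model and are visible on its inputs.

```cpp
#include <iostream>
#include <memory>
#include "include/api/context.h"
#include "include/api/model.h"

int LoadConvertedModel() {
  auto context = std::make_shared<mindspore::Context>();
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());

  mindspore::Model model;
  // lenet.ms stands in for any model produced by the converter tool.
  if (model.Build("lenet.ms", mindspore::kMindIR, context) != mindspore::kSuccess) {
    std::cerr << "Build failed" << std::endl;
    return -1;
  }
  // Inspect the input shapes fixed by --input_shape during conversion.
  for (auto &tensor : model.GetInputs()) {
    std::cout << tensor.Name() << ": rank " << tensor.Shape().size() << std::endl;
  }
  return 0;
}
```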
diff --git a/docs/lite/docs/source_en/index.rst b/docs/lite/docs/source_en/index.rst index 57a2a705fd..5b9321c205 100644 --- a/docs/lite/docs/source_en/index.rst +++ b/docs/lite/docs/source_en/index.rst @@ -216,7 +216,7 @@ MindSpore Lite Documentation