diff --git a/tutorials/inference/source_en/multi_platform_inference.md b/tutorials/inference/source_en/multi_platform_inference.md
index 2879428aaf758850a2ce2d535d0e0bcb5b8ec170..78cbc5d9ebddd6eae844d92b60151d001e053be3 100644
--- a/tutorials/inference/source_en/multi_platform_inference.md
+++ b/tutorials/inference/source_en/multi_platform_inference.md
@@ -10,26 +10,99 @@
-Models trained by MindSpore support the inference on different hardware platforms. This document describes the inference process on each platform.
+Models trained by MindSpore support inference on different hardware platforms. This document describes the inference process on each platform.
-The inference can be performed in either of the following methods based on different principles:
+## Model Files
-- Use a checkpoint file for inference, that is, use the inference API to load data and the checkpoint file for inference in the MindSpore training environment.
-- Convert the checkpoint file into a common model format, such as ONNX or AIR, for inference. The inference environment does not depend on MindSpore. In this way, inference can be performed across hardware platforms as long as the platform supports ONNX or AIR inference. For example, models trained on the Ascend 910 AI processor can be inferred on the GPU or CPU.
+MindSpore can save both the training parameters and the network model (which contains the parameter information).
-MindSpore supports the following inference scenarios based on the hardware platform:
+- Training parameters are saved as Checkpoint format files.
+- Network models are saved as MindIR, AIR, or ONNX files.
-| Hardware Platform | Model File Format | Description |
-| ----------------------- | ----------------- | ---------------------------------------- |
-| Ascend 910 AI processor | Checkpoint | The training environment dependency is the same as that of MindSpore. |
-| Ascend 310 AI processor | ONNX or AIR | Equipped with the ACL framework and supports the model in OM format. You need to use a tool to convert a model into the OM format. |
-| GPU | Checkpoint | The training environment dependency is the same as that of MindSpore. |
-| GPU | ONNX | Supports ONNX Runtime or SDK, for example, TensorRT. |
-| CPU | Checkpoint | The training environment dependency is the same as that of MindSpore. |
-| CPU | ONNX | Supports ONNX Runtime or SDK, for example, TensorRT. |
+The following describes the basic concepts and application scenarios of these formats; a short usage sketch follows the list.
-> - Open Neural Network Exchange (ONNX) is an open file format designed for machine learning. It is used to store trained models. It enables different AI frameworks (such as PyTorch and MXNet) to store model data in the same format and interact with each other. For details, visit the ONNX official website .
-> - Ascend Intermediate Representation (AIR) is an open file format defined by Huawei for machine learning and can better adapt to the Ascend AI processor. It is similar to ONNX.
-> - Ascend Computer Language (ACL) provides C++ API libraries for users to develop deep neural network applications, including device management, context management, stream management, memory management, model loading and execution, operator loading and execution, and media data processing. It matches the Ascend AI processor and enables hardware running management and resource management.
-> - Offline Model (OM) is supported by the Huawei Ascend AI processor. It implements preprocessing functions that can be completed without devices, such as operator scheduling optimization, weight data rearrangement and compression, and memory usage optimization.
-> - NVIDIA TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime to improve the inference speed of the deep learning model on edge devices. For details, see .
+- Checkpoint
+    - Uses the Protocol Buffers format and stores all parameter values in the network.
+    - Generally used to resume training after an interruption, or to fine-tune a model after training.
+
+- MindIR
+    - MindIR stands for MindSpore IR. It is a functional IR of MindSpore based on graph representation, which defines an extensible graph structure and the IR representations of operators.
+    - MindIR eliminates model differences between different backends and is generally used for cross-hardware-platform inference.
+
+- ONNX
+    - ONNX stands for Open Neural Network Exchange. It is a general representation format for machine learning models.
+    - Generally used for model migration between different frameworks or for use with inference engines such as TensorRT.
+
+- AIR
+    - AIR stands for Ascend Intermediate Representation. It is an open file format defined by Huawei for machine learning.
+    - AIR is adapted to Huawei's AI processors and is typically used for inference tasks on the Ascend 310.
+
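+As a minimal sketch (the toy network and file name below are hypothetical, used only for illustration), a Checkpoint file is typically saved and loaded as follows:
+
+```python
+import mindspore.nn as nn
+from mindspore.train.serialization import save_checkpoint, load_checkpoint, load_param_into_net
+
+# A toy network used only for illustration; any trained Cell is handled the same way.
+net = nn.Dense(10, 1)
+
+# Save all parameter values of the network to a Checkpoint (Protocol Buffers) file.
+save_checkpoint(net, "toy_net.ckpt")
+
+# Later, e.g. to resume training or fine-tune, load the parameters back
+# into a network with the same structure.
+param_dict = load_checkpoint("toy_net.ckpt")
+load_param_into_net(net, param_dict)
+```
+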
+## Executing Inference
+
+Inference can be performed in either of the following two ways, depending on the application environment.
+
+### 1. Local Inference
+
+Load the Checkpoint file generated by network training and call the `Model.predict` interface for inference. For details, see [Inference on the Ascend 910 AI processor](Inference on the Ascend 910 AI processor).
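+
+The sketch below illustrates this flow under the assumption of a toy network and a hypothetical checkpoint file name; a real workload would substitute its own network definition, checkpoint, and input data:
+
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor
+from mindspore.train import Model
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+# Rebuild the same network structure that was trained.
+net = nn.Dense(10, 1)
+
+# Load the trained parameters from the Checkpoint file into the network.
+load_param_into_net(net, load_checkpoint("toy_net.ckpt"))
+
+# Wrap the network in a Model and run inference on one batch of data.
+model = Model(net)
+input_data = Tensor(np.random.rand(1, 10).astype(np.float32))
+output = model.predict(input_data)
+print(output.shape)
+```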
+
+
+
+### 2. Cross-platform Inference
+
+Using the network definition and the Checkpoint file, call the `export` interface to export a model file, and then perform inference on the target platform.
+Currently, MindIR, ONNX, and AIR (Ascend AI processors only) models can be exported. For details, see [Saving Models](saving model).
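+
+As a hedged sketch (toy network, hypothetical checkpoint and output file names), exporting looks roughly like this, with `file_format` selecting the target format:
+
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor
+from mindspore.train.serialization import export, load_checkpoint, load_param_into_net
+
+# Rebuild the trained network and load its Checkpoint parameters.
+net = nn.Dense(10, 1)
+load_param_into_net(net, load_checkpoint("toy_net.ckpt"))
+
+# A dummy input with the same shape and dtype as the real inference input.
+input_data = Tensor(np.random.rand(1, 10).astype(np.float32))
+
+# file_format can be "MINDIR", "ONNX", or "AIR" (AIR requires an Ascend environment).
+export(net, input_data, file_name="toy_net", file_format="MINDIR")
+```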
+
+
+
+## Introduction to MindIR
+
+Through a unified IR, MindSpore defines the logical structure of the network and the attributes of the operators, which decouples the model file in MindIR format from the hardware platform and allows a model to be trained once and deployed multiple times.
+
+### 1. Basic Introduction
+
+As the unified model file format of MindSpore, MindIR stores both the network structure and the weight parameter values. It also supports deployment to the cloud Serving platform and the device-side Lite platform for inference tasks.
+The same MindIR file can be deployed on multiple hardware forms:
+
+- Cloud Serving deployment inference: after MindSpore trains and generates a MindIR model file, the file can be sent directly to MindSpore Serving for loading and inference without any additional model conversion, so that Ascend, GPU, CPU, and other hardware can share a unified model.
+
+- Device-side Lite deployment inference: MindIR can be used directly for Lite deployment. To meet the device-side requirement for a lightweight footprint, Lite provides model miniaturization and conversion functions: the original MindIR model file is converted from the Protocol Buffers format to the FlatBuffers format for storage, and the network structure is made more lightweight, so as to better meet device-side constraints on performance, memory, and so on.
+
+### 2. Application Scenarios
+
+First, use the network definition and the Checkpoint file to export a MindIR model file, and then perform inference tasks according to different needs, such as running inference on the Ascend 310, deploying an inference service based on MindSpore Serving, or performing device-side inference with Lite.
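+
+As an illustration of the first step (the file name is hypothetical, and loading MindIR with `mindspore.load` plus `nn.GraphCell` is available only in recent MindSpore versions), a MindIR file exported as shown above can also be loaded back and executed directly:
+
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor, load
+
+# Load the exported MindIR file and wrap the graph in a GraphCell for inference.
+graph = load("toy_net.mindir")
+net = nn.GraphCell(graph)
+
+input_data = Tensor(np.random.rand(1, 10).astype(np.float32))
+output = net(input_data)
+print(output.shape)
+```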
+
+## Networks Supported by MindIR
+
+|Supported Networks|
+| ------------- |
+|AlexNet|
+|BERT|
+|BGCF|
+|CenterFace|
+|CNN&CTC|
+|DeepLabV3|
+|DenseNet121|
+|Faster R-CNN|
+|GAT|
+|GCN|
+|GoogLeNet|
+|LeNet|
+|Mask R-CNN|
+|MASS|
+|MobileNetV2|
+|NCF|
+|PSENet|
+|ResNet|
+|ResNeXt|
+|InceptionV3|
+|SqueezeNet|
+|SSD|
+|Transformer|
+|TinyBert|
+|UNet2D|
+|VGG16|
+|Wide&Deep|
+|YOLOv3|
+|YOLOv4|