)
+ 24 }
+ ```
+
+The above content can be divided into two parts: the first part is the input list, and the second part is the graph structure. The first line tells us the name of the top graph of the network, `@6_5_1_construct_wrapper.15`, that is, the entry graph. Line 4 tells us how many inputs the network has. Lines 6 to 7 are the input list, in the format `%para[No.]_[name] : <[data_type]x[shape]>`. Line 9 tells us the number of subgraphs parsed from the network. Lines 11 to 24 show the graph structure, which contains several nodes, called `CNode`s. In this example, there are only two nodes: `Add` on line 14 and `Mul` on line 18.
+
+The `CNode` information is in the following format, which includes the node name, attributes, input nodes, output information, format, and the source code parsing call stack. Because the ANF graph is a unidirectional acyclic graph, the connections between nodes are displayed only based on the input relationships. The source code parsing call stack shows the relationship between a `CNode` and the script source code. For example, line 20 is parsed from line 21, and line 21 corresponds to `x = x * y` in the script.
+
+```text
+ %[No.]([debug_name]) = [OpName]([arg], ...) primitive_attrs: {[key]: [value], ...}
+ : (<[input data_type]x[input shape]>, ...) -> (<[output data_type]x[output shape]>, ...)
+ # Call stack for source code parsing
+```
+
+> Notice:
+> After several optimizations by the compiler, a node may undergo several changes (such as operator splitting and operator merging), so the source code parsing call stack of a node may not be in one-to-one correspondence with the script. It is only an auxiliary means of locating code.
+
+## Dump Required Data from the IR File
+
+The following code comes from `lenet.py` in the ModelZoo LeNet sample. Assume that you want to dump the data of the first convolutional layer, that is, `x = self.conv1(x)` in the following code.
+
+```python
+import mindspore.nn as nn
+from mindspore.common.initializer import Normal
+
+
+class LeNet5(nn.Cell):
+    def __init__(self, num_class=10, num_channel=1, include_top=True):
+        super(LeNet5, self).__init__()
+        self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
+        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
+        self.relu = nn.ReLU()
+        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+        self.include_top = include_top
+        if self.include_top:
+            self.flatten = nn.Flatten()
+            self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
+            self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
+            self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
+
+    def construct(self, x):
+        x = self.conv1(x)
+        x = self.relu(x)
+        x = self.max_pool2d(x)
+        x = self.conv2(x)
+        x = self.relu(x)
+        x = self.max_pool2d(x)
+        if not self.include_top:
+            return x
+        x = self.flatten(x)
+        x = self.relu(self.fc1(x))
+        x = self.relu(self.fc2(x))
+        x = self.fc3(x)
+        return x
+```
+
+Generally, graph 0, `hwopt_d_end_graph_0_[xxxx].ir`, is the data subgraph (if `dataset_sink_mode` is enabled), and graph 1 is the backbone network. So, search for `x = self.conv1(x)` in the dumped `hwopt_d_end_graph_1_[xxxx].ir` file. Four results are found, three of which involve `Cast` and `TransData`. Skipping these operators, which are generated by the precision conversion and format conversion optimizations, we finally locate lines 213 to 221, that is, `%24(equivoutput) = Conv2D(%23, %19)...`, which corresponds to `conv1` in the network. From the following information you can obtain the op name in the compiled graph (in the parentheses on line 216): `Default/network-TrainOneStepWithLossScaleCell/network-WithLossCell/_backbone-LeNet5/conv1-Conv2d/Conv2D-op89`.
+
+```text
+...
+ 213 %24(equivoutput) = Conv2D(%23, %19) {instance name: conv2d} primitive_attrs: {pri_format: NC1HWC0, stride: (1, 1, 1, 1), pad: (0, 0, 0, 0), pad_mode: valid, out_channel: 6, mode: 1, dilation: (1, 1, 1, 1), output_names: [output], group: 1, format: NCHW, visited: true, offset_a: 0, kernel_size: (5, 5), groups: 1, input_names: [x, w], pad_list: (0, 0, 0, 0), IsFeatureMapOutput: true, IsFeatureMapInputList: (0)}
+ 214     : (<Tensor[Float16]x[const vector][32, 1, 32, 32]>, <Tensor[Float16]x[const vector][6, 1, 5, 5]>) -> (<Tensor[Float16]x[const vector][32, 6, 28, 28]>)
+ 215     : (<Float16xNC1HWC0[const vector][32, 1, 32, 32, 16]>, <Float16xFracZ[const vector][25, 1, 16, 16]>) -> (<Float16xNC1HWC0[const vector][32, 1, 28, 28, 16]>)
+ 216 : (Default/network-TrainOneStepWithLossScaleCell/network-WithLossCell/_backbone-LeNet5/conv1-Conv2d/Conv2D-op89)
+ 217 # In file /home/workspace/mindspore/build/package/mindspore/nn/layer/conv.py(263)/ output = self.conv2d(x, self.weight)/
+ 218 # In file /home/workspace/mindspore/model_zoo/official/cv/lenet/src/lenet.py(49)/ x = self.conv1(x)/
+ 219 # In file /home/workspace/mindspore/build/package/mindspore/train/amp.py(101)/ out = self._backbone(data)/
+ 220 # In file /home/workspace/mindspore/build/package/mindspore/nn/wrap/loss_scale.py(323)/ grads = self.grad(self.network, weights)(*inputs, scaling_sens_filled)/
+ 221 # In file /home/workspace/mindspore/build/package/mindspore/train/dataset_helper.py(87)/ return self.network(*outputs)/
+...
+```
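+
+If the graph file is large, this search can be scripted. Below is a minimal sketch (the glob pattern stands in for the `[xxxx]` suffix, which varies between runs); it prints a small window above each occurrence of the source line so that the op name line, `: (Default/...)`, is visible:
+
+```python
+import glob
+
+pattern = "x = self.conv1(x)"
+for ir_path in glob.glob("hwopt_d_end_graph_1_*.ir"):
+    with open(ir_path) as f:
+        lines = f.readlines()
+    for i, line in enumerate(lines):
+        if pattern in line:
+            # The op name line precedes the call stack comments, so print
+            # a few lines of context above the match.
+            for ctx in lines[max(i - 6, 0):i + 1]:
+                print(ctx.rstrip())
+            print("-" * 60)
+```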
+
+After obtaining the op name, we can execute the dump process to save the input and output of the operator for debugging. Here, we will introduce a method called synchronous dump.
+
+1. Create the configuration file `data_dump.json`. This file stores the information about the operators to be dumped; copy the op name obtained in the previous step into the `kernels` key. For details about this file, see the [custom debugging info](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_debugging_info.html#id5).
+
+```json
+{
+ "common_dump_settings": {
+ "dump_mode": 1,
+ "path": "/absolute_path",
+ "net_name": "LeNet",
+ "iteration": 0,
+ "input_output": 0,
+ "kernels": ["Default/network-TrainOneStepWithLossScaleCell/network-WithLossCell/_backbone-LeNet5/conv1-Conv2d/Conv2D-op89"],
+ "support_device": [0,1,2,3,4,5,6,7]
+ },
+ "e2e_dump_settings": {
+ "enable": true,
+ "trans_flag": false
+ }
+}
+```
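+
+Because the op name is long and easy to mistype, the configuration can also be generated programmatically. A minimal sketch, using the settings from this example (`/absolute_path` remains a placeholder, and the field comments are assumptions based on the dump documentation):
+
+```python
+import json
+
+op_name = ("Default/network-TrainOneStepWithLossScaleCell/network-WithLossCell/"
+           "_backbone-LeNet5/conv1-Conv2d/Conv2D-op89")
+
+config = {
+    "common_dump_settings": {
+        "dump_mode": 1,            # 1: dump only the operators listed in "kernels"
+        "path": "/absolute_path",  # replace with an absolute output directory
+        "net_name": "LeNet",
+        "iteration": 0,
+        "input_output": 0,         # 0: dump both inputs and outputs
+        "kernels": [op_name],
+        "support_device": [0, 1, 2, 3, 4, 5, 6, 7],
+    },
+    "e2e_dump_settings": {
+        "enable": True,
+        "trans_flag": False,       # keep the data in the device format
+    },
+}
+
+with open("data_dump.json", "w") as f:
+    json.dump(config, f, indent=4)
+```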
+
+2. Configure the environment variable to specify the path of the configuration file.
+
+```bash
+export MINDSPORE_DUMP_CONFIG={Absolute path of data_dump.json}
+```
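+
+If you start training from a Python script rather than a shell, the same variable can be set in the script instead; a minimal sketch (the path is a placeholder):
+
+```python
+import os
+
+# Replace with the absolute path of your data_dump.json; this must be
+# set before MindSpore compiles the graph.
+os.environ["MINDSPORE_DUMP_CONFIG"] = "/path/to/data_dump.json"
+```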
+
+3. Execute the case to dump data. During the execution, MindSpore dumps the input and output data of the specified operator to the configured path.
+
+In this example, the following files are obtained, which correspond to the inputs and output of the operator.
+
+```text
+├── Default--network-TrainOneStepWithLossScaleCell--network-WithLossCell--_backbone-LeNet5--conv1-Conv2d--Conv2D-op89_input_0_shape_32_1_32_32_16_Float16_NC1HWC0.bin
+├── Default--network-TrainOneStepWithLossScaleCell--network-WithLossCell--_backbone-LeNet5--conv1-Conv2d--Conv2D-op89_input_1_shape_25_1_16_16_Float16_FracZ.bin
+└── Default--network-TrainOneStepWithLossScaleCell--network-WithLossCell--_backbone-LeNet5--conv1-Conv2d--Conv2D-op89_output_0_shape_32_1_28_28_16_Float16_NC1HWC0.bin
+```
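+
+The file names themselves encode the op name (with `/` replaced by `--`), whether the tensor is an input or an output, its index, its shape, its data type, and its format. A small parsing sketch, with the field layout inferred from the names above:
+
+```python
+import re
+
+def parse_dump_name(filename):
+    # <op-name>_<input|output>_<idx>_shape_<d0>_<d1>_..._<dtype>_<format>.bin
+    m = re.match(
+        r"(?P<op>.+)_(?P<io>input|output)_(?P<idx>\d+)_shape_(?P<rest>.+)\.bin",
+        filename,
+    )
+    parts = m.group("rest").split("_")
+    return {
+        "op_name": m.group("op").replace("--", "/"),
+        "io": m.group("io"),
+        "index": int(m.group("idx")),
+        "shape": tuple(int(d) for d in parts[:-2]),
+        "dtype": parts[-2],
+        "format": parts[-1],
+    }
+
+print(parse_dump_name(
+    "Default--network-TrainOneStepWithLossScaleCell--network-WithLossCell--"
+    "_backbone-LeNet5--conv1-Conv2d--Conv2D-op89"
+    "_input_0_shape_32_1_32_32_16_Float16_NC1HWC0.bin"
+))
+```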
+
+4. Parse the dumped data.
+
+You can use `numpy.fromfile` to read the file generated in the previous step. The resulting `ndarray` contains the data of the corresponding input or output.
+
+```python
+import numpy
+
+# Read the raw dump; see the note below for restoring dtype and shape.
+output = numpy.fromfile("Default--network-TrainOneStepWithLossScaleCell--network-WithLossCell--_backbone-LeNet5--conv1-Conv2d--Conv2D-op89_input_0_shape_32_1_32_32_16_Float16_NC1HWC0.bin")
+print(output)
+```
+
+The output is:
+
+```text
+[1.17707155e-17 4.07526143e-17 5.84038559e-18 ... 0.00000000e+00 0.00000000e+00 0.00000000e+00]
+```
+