diff --git a/tutorials/experts/source_en/index.rst b/tutorials/experts/source_en/index.rst
index 038314fdf4c49736a830e2c3bfcf01e29784fbee..57931ba36036389d72db26fd67be31ee56ee50ba 100644
--- a/tutorials/experts/source_en/index.rst
+++ b/tutorials/experts/source_en/index.rst
@@ -22,7 +22,6 @@ For Experts
network/control_flow
network/op_overload
- network/custom_cell_reverse
network/jit_class
network/constexpr
network/dependency_control
diff --git a/tutorials/experts/source_en/network/custom_cell_reverse.md b/tutorials/experts/source_en/network/custom_cell_reverse.md
deleted file mode 100644
index 36881e4a4cc327299db76866ed33405cb7d32c08..0000000000000000000000000000000000000000
--- a/tutorials/experts/source_en/network/custom_cell_reverse.md
+++ /dev/null
@@ -1,158 +0,0 @@
-# Customizing Reverse Propagation Function of Cell
-
-
-
-When MindSpore is used to build a neural network, the `nn.Cell` class needs to be inherited. We might have the following problems when we construct networks:
-
-1. There are operations or operators in Cell that are not derivable or for which reverse propagation rules are not yet defined.
-
-2. When replacing certain forward calculation procedures of Cell, you need to customize the corresponding reverse propagation function.
-
-Then we can use the function of customizing the backward propagation function of the Cell object. The format is as follows:
-
-```python
-def bprop(self, ..., out, dout):
- return ...
-```
-
-- Input parameters: Input parameters in the forward propagation plus `out` and `dout`. `out` indicates the computation result of the forward propagation, and `dout` indicates the gradient returned to the `nn.Cell` object.
-- Return values: Gradient of each input in the forward propagation. The number of return values must be the same as the number of inputs in the forward propagation.
-
-A complete simple example is as follows:
-
-```python
-import mindspore.nn as nn
-import mindspore as ms
-import mindspore.ops as ops
-
-class Net(nn.Cell):
- def __init__(self):
- super(Net, self).__init__()
- self.matmul = ops.MatMul()
-
- def construct(self, x, y):
- out = self.matmul(x, y)
- return out
-
- def bprop(self, x, y, out, dout):
- dx = x + 1
- dy = y + 1
- return dx, dy
-
-
-class GradNet(nn.Cell):
- def __init__(self, net):
- super(GradNet, self).__init__()
- self.net = net
-
- def construct(self, x, y):
- gradient_function = ms.grad(self.net, grad_position=(0, 1))
- return gradient_function(x, y)
-
-
-x = ms.Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=ms.float32)
-y = ms.Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=ms.float32)
-out = GradNet(Net())(x, y)
-print(out)
-```
-
-```text
- (Tensor(shape=[2, 3], dtype=Float32, value=
- [[ 1.50000000e+00, 1.60000002e+00, 1.39999998e+00],
- [ 2.20000005e+00, 2.29999995e+00, 2.09999990e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
- [[ 1.00999999e+00, 1.29999995e+00, 2.09999990e+00],
- [ 1.10000002e+00, 1.20000005e+00, 2.29999995e+00],
- [ 3.09999990e+00, 2.20000005e+00, 4.30000019e+00]]))
-```
-
-This example customizes the gradient calculation process for the `MatMul` operation by defining `bprop` function of Cell, where `dx` is the derivative of the input `x`, `dy` is the derivative of the input `y`, `out` is the result of the `MatMul` calculation, and `dout` is the gradient passed back to `Net`.
-
-## Application example
-
-1. There are some operators which is non-differentiable or has not been defined the back propagation function in the Cell. For example, the operator `ReLU6` has not been defined its second-order back propagation rule, which can be defined by customizing the `bprop` function of Cell. The code is as follow:
-
- ```python
- import mindspore.nn as nn
- from mindspore import Tensor
- from mindspore import dtype as mstype
- import mindspore.ops as ops
-
-
- class ReluNet(nn.Cell):
- def __init__(self):
- super(ReluNet, self).__init__()
- self.relu = ops.ReLU()
-
- def construct(self, x):
- return self.relu(x)
-
-
- class Net(nn.Cell):
- def __init__(self):
- super(Net, self).__init__()
- self.relu6 = ops.ReLU6()
- self.relu = ReluNet()
-
- def construct(self, x):
- return self.relu6(x)
-
- def bprop(self, x, out, dout):
- dx = self.relu(x)
- return (dx, )
-
-
- x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
- net = Net()
- out = ops.grad(ops.grad(net))(x)
- print(out)
- ```
-
- ```text
- [[1. 1. 1.]
- [1. 1. 1.]]
- ```
-
- The above code defines the first-order back propagation rule by customizing the `bprop` function of `Net` and gets the second-order back propagation rule by the back propagation rule of `self.relu` in the `bprop`.
-
-2. We need the customized back propagation function when we want to replace some forward calculate process of the Cell. For example, there is following code in the network SNN:
-
- ```python
- class relusigmoid(nn.Cell):
- def __init__(self):
- super().__init__()
- self.sigmoid = ops.Sigmoid()
- self.greater = ops.Greater()
-
- def construct(self, x):
- spike = self.greater(x, 0)
- return spike.astype(mindspore.float32)
-
- def bprop(self, x, out, dout):
- sgax = self.sigmoid(x * 5.0)
- grad_x = dout * (1 - sgax) * sgax * 5.0
- return (grad_x,)
-
- class IFNode(nn.Cell):
- def __init__(self, v_threshold=1.0, fire=True, surrogate_function=relusigmoid()):
- super().__init__()
- self.v_threshold = v_threshold
- self.fire = fire
- self.surrogate_function = surrogate_function
-
- def construct(self, x, v):
- v = v + x
- if self.fire:
- spike = self.surrogate_function(v - self.v_threshold) * self.v_threshold
- v -= spike
- return spike, v
- return v, v
- ```
-
- The above code replaces the origin sigmoid activation function in the sub-network `IFNode` with a customized activation function relusigmoid, and then we should customize the new back propagation function for the new activation function.
-
-## Constraints
-
-- If the number of return values of the `bprop` function is 1, the return value must be written in the tuple format, that is, `return (dx,)`.
-- In graph mode, the `bprop` function needs to be converted into a graph IR. Therefore, the static graph syntax must be complied with. For details, see [Static Graph Syntax Support](https://www.mindspore.cn/docs/en/master/note/static_graph_syntax_support.html).
-- Only support returning the gradient of the forward propagation input, not the gradient of the `Parameter`.
-- The use of `Parameter` is not supported in `bprop`.
diff --git a/tutorials/source_en/advanced/modules.rst b/tutorials/source_en/advanced/modules.rst
index 9a583873609e547bc2f0e7a929773ba92dfd4f3f..96117a525a2fb8a516d56b58b8579807b62ed35c 100644
--- a/tutorials/source_en/advanced/modules.rst
+++ b/tutorials/source_en/advanced/modules.rst
@@ -1,9 +1,121 @@
-Module Customization
-=====================
+Model Module Customization
+===========================
.. toctree::
:maxdepth: 1
- modules/parameter
+ modules/layer
+ modules/initializer
modules/loss
- modules/optim
+ modules/optimizer
+
+Basic Usage Examples
+--------------------
+
+The neural network model is composed of various layers. MindSpore
+provides ``Cell`` as the basic unit for constructing neural network
+layers, and neural networks are encapsulated on the basis of ``Cell``.
+In the following, the classical AlexNet model is constructed by using
+``Cell``.
+
+.. figure:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/tutorials/source_zh_cn/advanced/modules/images/AlexNet.ppm
+ :alt: alextnet
+
+As shown in the figure, AlexNet consists of five convolutional layers in
+series with three fully-connected layers. We construct it by using the
+neural network layer interface provided by ``mindspore.nn``.
+
+.. code:: python
+
+ from mindspore import nn
+
+The following code shows how to quickly construct AlexNet by using
+``nn.Cell``.
+
+- The top-level neural network inherits from ``nn.Cell`` and is itself
+  a nested structure composed of Cells.
+- Each neural network layer is a subclass of ``nn.Cell``.
+- ``nn.SequentialCell`` can be used to simplify the definition of
+  models with sequential structures.
+
+.. code:: python
+
+ class AlexNet(nn.Cell):
+ def __init__(self, num_classes=1000, dropout=0.5):
+ super().__init__()
+ self.features = nn.SequentialCell(
+ nn.Conv2d(3, 64, kernel_size=11, stride=4, pad_mode='pad', padding=2),
+ nn.ReLU(),
+ nn.MaxPool2d(kernel_size=3, stride=2),
+ nn.Conv2d(64, 192, kernel_size=5, pad_mode='pad', padding=2),
+ nn.ReLU(),
+ nn.MaxPool2d(kernel_size=3, stride=2),
+ nn.Conv2d(192, 384, kernel_size=3, pad_mode='pad', padding=1),
+ nn.ReLU(),
+ nn.Conv2d(384, 256, kernel_size=3, pad_mode='pad', padding=1),
+ nn.ReLU(),
+ nn.Conv2d(256, 256, kernel_size=3, pad_mode='pad', padding=1),
+ nn.ReLU(),
+ nn.MaxPool2d(kernel_size=3, stride=2),
+ )
+ self.classifier = nn.SequentialCell(
+ nn.Dropout(1-dropout),
+ nn.Dense(256 * 6 * 6, 4096),
+ nn.ReLU(),
+ nn.Dropout(1-dropout),
+ nn.Dense(4096, 4096),
+ nn.ReLU(),
+ nn.Dense(4096, num_classes),
+ )
+
+ def construct(self, x):
+ x = self.features(x)
+ x = x.view(x.shape[0], 256 * 6 * 6)
+ x = self.classifier(x)
+ return x
+
+When defining a model, Python syntax can be used freely inside the
+``construct`` method to build the model structure, including
+conditional statements, loops, and other control flow. However, during
+just-in-time (JIT) compilation this syntax needs to be parsed by the
+compiler and is therefore subject to restrictions. For details, refer to
+`Static Graph Syntax Support <https://www.mindspore.cn/docs/en/master/note/static_graph_syntax_support.html>`_ .
+
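+Control flow written in ``construct`` must therefore satisfy those
+rules once the network is compiled. Below is a minimal sketch, assuming
+the ``AlexNet`` class defined above, that switches MindSpore to graph
+(static) mode so that ``construct`` is compiled before execution:
+
+.. code:: python
+
+    import numpy as np
+    import mindspore as ms
+
+    # In graph mode, construct() is parsed and compiled by MindSpore,
+    # so it must follow the static graph syntax rules linked above.
+    ms.set_context(mode=ms.GRAPH_MODE)
+
+    network = AlexNet()
+    x = ms.Tensor(np.random.randn(1, 3, 224, 224), ms.float32)
+    print(network(x).shape)  # expected: (1, 1000)
+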
+After completing the model construction, we construct a single data
+sample and feed it into the instantiated AlexNet to obtain the forward
+computation result.
+
+.. code:: python
+
+ import numpy as np
+ import mindspore
+ from mindspore import Tensor
+
+ x = Tensor(np.random.randn(1, 3, 224, 224), mindspore.float32)
+
+.. code:: python
+
+ network = AlexNet()
+ logits = network(x)
+ print(logits.shape)
+
+.. parsed-literal::
+
+ (1, 1000)
+
+More Usage Scenarios
+---------------------
+
+In addition to the basic network structure construction, the following
+pages introduce in detail the neural network layer (Layer), the loss
+function (Loss), the optimizer (Optimizer), the parameters (Parameter)
+required by neural network layers and their initialization methods
+(Initializer), and other scenarios. A short parameter-inspection sketch
+follows the links below.
+
+- `Cell and
+ Parameters `__
+- `Parameter
+ initialization `__
+- `Loss
+ function `__
+- `Optimizer `__
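+
+As a quick taste of the parameter-related interfaces introduced above,
+the sketch below (assuming the ``AlexNet`` network defined earlier on
+this page) lists a few of its trainable parameters, each of which is a
+``Parameter`` object managed by the ``Cell``:
+
+.. code:: python
+
+    network = AlexNet()
+    # Weights and biases registered by each layer are Parameter objects.
+    for param in network.trainable_params()[:4]:
+        print(param.name, param.shape)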
+
diff --git a/tutorials/source_en/advanced/modules/optim.md b/tutorials/source_en/advanced/modules/optimizer.md
similarity index 98%
rename from tutorials/source_en/advanced/modules/optim.md
rename to tutorials/source_en/advanced/modules/optimizer.md
index c458c3abdbc49b9456470a580a0b547d40197c22..2ae829fc0ff7758039876dbda3808a634309901d 100644
--- a/tutorials/source_en/advanced/modules/optim.md
+++ b/tutorials/source_en/advanced/modules/optimizer.md
@@ -1,4 +1,4 @@
-
+
# Optimizer
diff --git a/tutorials/source_en/advanced/modules/parameter.md b/tutorials/source_en/advanced/modules/parameter.md
deleted file mode 100644
index 4afaf2a5e929d477b5dea6586d0e80679ee10a6e..0000000000000000000000000000000000000000
--- a/tutorials/source_en/advanced/modules/parameter.md
+++ /dev/null
@@ -1,314 +0,0 @@
-# Network Arguments
-
-
-
-MindSpore provides initialization modules for parameters and network arguments. You can initialize network arguments by encapsulating operators to call character strings, Initializer subclasses, or customized tensors.
-
-In the following figure, a blue box indicates a specific execution operator, and a green box indicates a tensor. As the data in the neural network model, the tensor continuously flows in the network, including the data input of the network model and the input and output data of the operator. A red box indicates a parameter which is used as a attribute of the network model or operators in the model or as an intermediate parameter and temporary parameter generated in the backward graph.
-
-
-
-The following describes the data type (`dtype`), parameter (`Parameter`), parameter tuple (`ParameterTuple`), network initialization method, and network argument update.
-
-## dtype
-
-MindSpore tensors support different data types, including int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32, float64, and Boolean. These data types correspond to those of NumPy. For details about supported data types, visit [mindspore.dtype](https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype).
-
-In the computation process of MindSpore, the int data type in Python is converted into the defined int64 type, and the float data type is converted into the defined float32 type.
-
-In the following code, the data type of MindSpore is int32.
-
-```python
-import mindspore as ms
-
-data_type = ms.int32
-print(data_type)
-```
-
-```text
- Int32
-```
-
-### Data Type Conversion API
-
-MindSpore provides the following APIs for conversion between NumPy data types and Python built-in data types:
-
-- `dtype_to_nptype`: converts the data type of MindSpore to the corresponding data type of NumPy.
-- `dtype_to_pytype`: converts the data type of MindSpore to the corresponding built-in data type of Python.
-- `pytype_to_dtype`: converts the built-in data type of Python to the corresponding data type of MindSpore.
-
-The following code implements the conversion between different data types and prints the converted type.
-
-```python
-import mindspore as ms
-
-np_type = ms.dtype_to_nptype(ms.int32)
-ms_type = ms.pytype_to_dtype(int)
-py_type = ms.dtype_to_pytype(ms.float64)
-
-print(np_type)
-print(ms_type)
-print(py_type)
-```
-
-```text
-
- Int64
-
-```
-
-## Parameter
-
-A [Parameter](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.Parameter.html#mindspore.Parameter) of MindSpore indicates an argument that needs to be updated during network training. For example, the most common parameters of the `nn.conv` operator during forward computation include `weight` and `bias`. During backward graph build and backward propagation computation, many intermediate parameters are generated to temporarily store first-step information and intermediate output values.
-
-### Parameter Initialization
-
-There are many methods for initializing `Parameter`, which can receive different data types such as `Tensor` and `Initializer`.
-
-- `default_input`: input data. Four data types are supported: `Tensor`, `Initializer`, `int`, and `float`.
-- `name`: name of a parameter, which is used to distinguish the parameter from other parameters on the network.
-- `requires_grad`: indicates whether to compute the argument gradient during network training. If the argument gradient does not need to be computed, set `requires_grad` to `False`.
-
-In the following sample code, the `int` or `float` data type is used to directly create a parameter:
-
-```python
-import mindspore as ms
-
-x = ms.Parameter(default_input=2.0, name='x')
-y = ms.Parameter(default_input=5.0, name='y')
-z = ms.Parameter(default_input=5, name='z', requires_grad=False)
-
-print(type(x))
-print(x, "value:", x.asnumpy())
-print(y, "value:", y.asnumpy())
-print(z, "value:", z.asnumpy())
-```
-
-```text
-
- Parameter (name=x, shape=(), dtype=Float32, requires_grad=True) value: 2.0
- Parameter (name=y, shape=(), dtype=Float32, requires_grad=True) value: 5.0
- Parameter (name=z, shape=(), dtype=Int32, requires_grad=False) value: 5
-```
-
-In the following code, a MindSpore `Tensor` is used to create a parameter:
-
-```python
-import numpy as np
-import mindspore as ms
-
-my_tensor = ms.Tensor(np.arange(2 * 3).reshape((2, 3)))
-x = ms.Parameter(default_input=my_tensor, name="tensor")
-
-print(x)
-```
-
-```text
- Parameter (name=tensor, shape=(2, 3), dtype=Int64, requires_grad=True)
-```
-
-In the following code example, `Initializer` is used to create a parameter:
-
-```python
-from mindspore.common.initializer import initializer as init
-import mindspore as ms
-
-x = ms.Parameter(default_input=init('ones', [1, 2, 3], ms.float32), name='x')
-print(x)
-```
-
-```text
- Parameter (name=x, shape=(1, 2, 3), dtype=Float32, requires_grad=True)
-```
-
-### Attribute
-
-The default attributes of a `Parameter` include `name`, `shape`, `dtype`, and `requires_grad`.
-
-The following example describes how to initialize a `Parameter` by using a `Tensor` and obtain the attributes of the `Parameter`. The sample code is as follows:
-
-```python
-my_tensor = ms.Tensor(np.arange(2 * 3).reshape((2, 3)))
-x = ms.Parameter(default_input=my_tensor, name="x")
-
-print("x: ", x)
-print("x.data: ", x.data)
-```
-
-```text
- x: Parameter (name=x, shape=(2, 3), dtype=Int64, requires_grad=True)
- x.data: Parameter (name=x, shape=(2, 3), dtype=Int64, requires_grad=True)
-```
-
-### Parameter Operations
-
-1. `clone`: clones a tensor `Parameter`. After the cloning is complete, you can specify a new name for the new `Parameter`.
-
- ```python
- x = ms.Parameter(default_input=init('ones', [1, 2, 3], ms.float32))
- x_clone = x.clone()
- x_clone.name = "x_clone"
-
- print(x)
- print(x_clone)
- ```
-
- ```text
- Parameter (name=Parameter, shape=(1, 2, 3), dtype=Float32, requires_grad=True)
- Parameter (name=x_clone, shape=(1, 2, 3), dtype=Float32, requires_grad=True)
- ```
-
-2. `set_data`: modifies the data or `shape` of the `Parameter`.
-
- The `set_data` method has two input parameters: `data` and `slice_shape`. The `data` indicates the newly input data of the `Parameter`. The `slice_shape` indicates whether to change the `shape` of the `Parameter`. The default value is False.
-
- ```python
- x = ms.Parameter(ms.Tensor(np.ones((1, 2)), ms.float32), name="x", requires_grad=True)
- print(x, x.asnumpy())
-
- y = x.set_data(ms.Tensor(np.zeros((1, 2)), ms.float32))
- print(y, y.asnumpy())
-
- z = x.set_data(ms.Tensor(np.ones((1, 4)), ms.float32), slice_shape=True)
- print(z, z.asnumpy())
- ```
-
- ```text
- Parameter (name=x, shape=(1, 2), dtype=Float32, requires_grad=True) [[1. 1.]]
- Parameter (name=x, shape=(1, 2), dtype=Float32, requires_grad=True) [[0. 0.]]
- Parameter (name=x, shape=(1, 4), dtype=Float32, requires_grad=True) [[1. 1. 1. 1.]]
- ```
-
-3. `init_data`: In parallel scenarios, the shape of a argument changes. You can call the `init_data` method of `Parameter` to obtain the original data.
-
- ```python
- x = ms.Parameter(ms.Tensor(np.ones((1, 2)), ms.float32), name="x", requires_grad=True)
-
- print(x.init_data(), x.init_data().asnumpy())
- ```
-
- ```text
- Parameter (name=x, shape=(1, 2), dtype=Float32, requires_grad=True) [[1. 1.]]
- ```
-
-### Updating Parameters
-
-MindSpore provides the network argument update function. You can use `nn.ParameterUpdate` to update network arguments. The input argument type must be tensor, and the tensor `shape` must be the same as the original network argument `shape`.
-
-The following is an example of updating the weight arguments of a network:
-
-```python
-import numpy as np
-import mindspore as ms
-from mindspore import nn
-
-# Build a network.
-network = nn.Dense(3, 4)
-
-# Obtain the weight argument of a network.
-param = network.parameters_dict()['weight']
-print("Parameter:\n", param.asnumpy())
-
-# Update the weight argument.
-update = nn.ParameterUpdate(param)
-weight = ms.Tensor(np.arange(12).reshape((4, 3)), ms.float32)
-output = update(weight)
-print("Parameter update:\n", output)
-```
-
-```text
- Parameter:
- [[-0.0164615 -0.01204428 -0.00813806]
- [-0.00270927 -0.0113328 -0.01384139]
- [ 0.00849093 0.00351116 0.00989969]
- [ 0.00233028 0.00649209 -0.0021333 ]]
- Parameter update:
- [[ 0. 1. 2.]
- [ 3. 4. 5.]
- [ 6. 7. 8.]
- [ 9. 10. 11.]]
-```
-
-## Parameter Tuple
-
-The [ParameterTuple](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.ParameterTuple.html#mindspore.ParameterTuple) is used to store multiple `Parameter`s. It is inherited from the `tuple` and provides the clone function.
-
-The following example describes how to create a `ParameterTuple`:
-
-```python
-import numpy as np
-import mindspore as ms
-from mindspore.common.initializer import initializer
-
-# Create.
-x = ms.Parameter(default_input=ms.Tensor(np.arange(2 * 3).reshape((2, 3))), name="x")
-y = ms.Parameter(default_input=initializer('ones', [1, 2, 3], ms.float32), name='y')
-z = ms.Parameter(default_input=2.0, name='z')
-params = ms.ParameterTuple((x, y, z))
-
-# Clone from params and change the name to "params_copy".
-params_copy = params.clone("params_copy")
-
-print(params)
-print(params_copy)
-```
-
-```text
- (Parameter (name=x, shape=(2, 3), dtype=Int64, requires_grad=True), Parameter (name=y, shape=(1, 2, 3), dtype=Float32, requires_grad=True), Parameter (name=z, shape=(), dtype=Float32, requires_grad=True))
- (Parameter (name=params_copy.x, shape=(2, 3), dtype=Int64, requires_grad=True), Parameter (name=params_copy.y, shape=(1, 2, 3), dtype=Float32, requires_grad=True), Parameter (name=params_copy.z, shape=(), dtype=Float32, requires_grad=True))
-```
-
-## Initializing Network Arguments
-
-MindSpore provides multiple network argument initialization modes and encapsulates the argument initialization function in some operators. The following uses the `Conv2d` operator as an example to describe how to use the `Initializer` subclass, character string, and customized `Tensor` to initialize network arguments.
-
-### Initializer
-
-Use `Initializer` to initialize network arguments. The sample code is as follows:
-
-```python
-import numpy as np
-import mindspore.nn as nn
-import mindspore as ms
-from mindspore.common import initializer as init
-
-ms.set_seed(1)
-
-input_data = ms.Tensor(np.ones([1, 3, 16, 50], dtype=np.float32))
-# Convolutional layer. The number of input channels is 3, the number of output channels is 64, the size of the convolution kernel is 3 x 3, and the weight argument is a random number generated in normal distribution.
-net = nn.Conv2d(3, 64, 3, weight_init=init.Normal(0.2))
-# Network output
-output = net(input_data)
-```
-
-### Character String Initialization
-
-Use a character string to initialize network arguments. The content of the character string must be the same as the `Initializer` name (case insensitive). If the character string is used for initialization, the default arguments in the `Initializer` class are used. For example, using the character string `Normal` is equivalent to using `Normal()` of `Initializer`. The following is an example:
-
-```python
-import numpy as np
-import mindspore.nn as nn
-import mindspore as ms
-
-ms.set_seed(1)
-
-input_data = ms.Tensor(np.ones([1, 3, 16, 50], dtype=np.float32))
-net = nn.Conv2d(3, 64, 3, weight_init='Normal')
-output = net(input_data)
-```
-
-### Tensor Initialization
-
-You can also customize a `Tensor` to initialize the arguments of operators in the network model. The sample code is as follows:
-
-```python
-import numpy as np
-import mindspore.nn as nn
-import mindspore as ms
-
-init_data = ms.Tensor(np.ones([64, 3, 3, 3]), dtype=ms.float32)
-input_data = ms.Tensor(np.ones([1, 3, 16, 50], dtype=np.float32))
-
-net = nn.Conv2d(3, 64, 3, weight_init=init_data)
-output = net(input_data)
-```
diff --git a/tutorials/source_zh_cn/advanced/modules.rst b/tutorials/source_zh_cn/advanced/modules.rst
index a8a157636cf9c0a360318b411ff2ea035e2bbd2f..631db44a044ab0a45f984e9ad1adc370c0cbd005 100644
--- a/tutorials/source_zh_cn/advanced/modules.rst
+++ b/tutorials/source_zh_cn/advanced/modules.rst
@@ -25,8 +25,6 @@
.. figure:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/tutorials/source_zh_cn/advanced/modules/images/AlexNet.ppm
:alt: alextnet
- alextnet
-
如图所示,AlexNet由5个卷积层与3个全连接层串联构成,我们使用\ ``mindspore.nn``\ 提供的神经网络层接口进行构造。
.. code:: python