diff --git a/docs/federated/docs/source_en/image_classification_application.md b/docs/federated/docs/source_en/image_classification_application.md
index 5c77e7ac53631c49d6ad2afbb8a6aaf72b294efd..51e7b2935cddf22646908300110f3f420bd30416 100644
--- a/docs/federated/docs/source_en/image_classification_application.md
+++ b/docs/federated/docs/source_en/image_classification_application.md
@@ -307,7 +307,7 @@ Currently, the `3500_clients_bin` folder contains data of 3500 clients. This scr
The following figure shows the accuracy of the test dataset for federated learning on 50 clients (set `server_num` to 16).
-
+
The total number of federated learning iterations is 100, the number of epochs for local training on the client is 20, and the value of batchSize is 32.
diff --git a/docs/federated/docs/source_en/images/lenet_50_clients_acc.png b/docs/federated/docs/source_en/images/lenet_50_clients_acc.png
deleted file mode 100644
index c1282811f7161d77ec2ea563d96983ef293dbf43..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/lenet_50_clients_acc.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/split_wnd_application.md b/docs/federated/docs/source_en/split_wnd_application.md
index a75fa829d9d690ae94bd30765f0b8485b53e4b55..bad52fa81570a8bf532c9d94995df97b9534bee5 100644
--- a/docs/federated/docs/source_en/split_wnd_application.md
+++ b/docs/federated/docs/source_en/split_wnd_application.md
@@ -10,12 +10,18 @@ Vertical FL model training scenarios: including two stages of forward propagatio
Forward propagation: After the data intersection module processes the participants' data and aligns the feature and label information, the Follower participant feeds its local feature information into the precursor network model. The feature tensor output by the precursor network model is encrypted or scrambled by the privacy security module and transmitted to the Leader participant through the communication module. The Leader participant feeds the received feature tensor into the post-level network model, and the predicted values output by the post-level network model, together with the local label information, are used as the loss-function input to calculate the loss value.
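
The sample alignment performed by the data intersection module can be sketched as follows (a deliberately naive plaintext version with made-up IDs; a real deployment uses a privacy-preserving set-intersection protocol so that neither participant reveals its full sample-ID list):

```python
def align(leader_ids, follower_ids):
    """Keep only the sample IDs present on both sides, in a common order,
    so that feature rows and label rows line up across participants."""
    return sorted(set(leader_ids) & set(follower_ids))

# Hypothetical sample IDs held by each participant.
leader_ids = ["u3", "u1", "u7", "u2"]
follower_ids = ["u2", "u5", "u1", "u9"]

print(align(leader_ids, follower_ids))  # → ['u1', 'u2']
```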
+
+
Backward propagation: The Leader participant calculates the parameter gradients of the post-level network model based on the loss value, trains and updates the parameters of the post-level network model, and transmits the gradient tensor associated with the feature tensor to the Follower participant through the communication module after it is encrypted and scrambled by the privacy security module. The Follower participant uses the received gradient tensor to train and update the parameters of the precursor network model.
+
+
Vertical FL model inference scenario: similar to the forward propagation phase of the training scenario, except that the predicted values of the post-level network model are used directly as the output, without calculating the loss value.
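
The three scenarios above can be sketched end to end with plain NumPy (hypothetical layer shapes and a simple logistic head standing in for the two networks; the encryption/scrambling and communication modules are omitted for clarity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Follower: precursor (bottom) network -- one linear layer over its 19 local features.
W_f = rng.normal(size=(19, 8)) * 0.1
# Leader: post-level (top) network -- linear head over the received tensor plus its 20 features.
W_l = rng.normal(size=(8 + 20, 1)) * 0.1

x_follower = rng.normal(size=(4, 19))              # Follower-held features
x_leader = rng.normal(size=(4, 20))                # Leader-held features
y = rng.integers(0, 2, size=(4, 1)).astype(float)  # Leader-held labels

lr, losses = 0.1, []
for _ in range(50):
    # Forward: the Follower transmits only the feature tensor h, never its raw features.
    h = x_follower @ W_f
    z = np.concatenate([h, x_leader], axis=1) @ W_l
    p = 1.0 / (1.0 + np.exp(-z))                   # predicted values
    losses.append(float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))))

    # Backward: the Leader updates its own model and returns only dL/dh to the Follower.
    dz = (p - y) / len(y)
    grad_W_l = np.concatenate([h, x_leader], axis=1).T @ dz
    dh = dz @ W_l[:8].T                            # the gradient tensor sent back
    W_l -= lr * grad_W_l
    W_f -= lr * (x_follower.T @ dh)                # Follower's local update

# Inference is the forward pass alone, taking p as the output without computing a loss.
```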
## Network and Data
+
+
This sample provides a vertical federated learning training example for recommendation tasks, using the Wide&Deep network and the Criteo dataset. As shown above, the vertical federated learning system consists of a Leader participant and a Follower participant. The Leader participant holds 20×2-dimensional feature information together with the label information, and the Follower participant holds 19×2-dimensional feature information. Each participant deploys one Wide&Deep network, and the two realize collaborative training of the network model by exchanging embedding vectors and gradient vectors without disclosing the original features or label information.
For a detailed description of the principle properties of Wide&Deep networks, see [MindSpore ModelZoo - Wide&Deep - Wide&Deep Overview](https://gitee.com/mindspore/models/blob/master/official/recommend/wide_and_deep/README.md#widedeep-description) and its [research paper](https://arxiv.org/pdf/1606.07792.pdf).
diff --git a/docs/mindspore/source_en/note/static_graph_syntax_support.md b/docs/mindspore/source_en/note/static_graph_syntax_support.md
index f6d7f72cd295fa3a9088e9f71f9e6255277c4c88..588871ea84b89e375b05e0a6a2ffd815297fedf4 100644
--- a/docs/mindspore/source_en/note/static_graph_syntax_support.md
+++ b/docs/mindspore/source_en/note/static_graph_syntax_support.md
@@ -775,9 +775,7 @@ Parameter: `cond` -- Variables of `Bool` type and constants of `Bool`, `List`, `
Restrictions:
-- If `cond` is not a constant, the variable or constant assigned to a same sign in different branches should have same data type.If the data type of assigned variables or constants is `Tensor`, the variables and constants should have same shape and element type.
-
-- The number of `if` cannot exceed 100.
+- If `cond` is not a constant, the variable or constant assigned to a same sign in different branches should have same data type. If the data type of assigned variables or constants is `Tensor`, the variables and constants should have same shape and element type. For shape consistency restrictions, please refer to [ShapeJoin Rules](https://www.mindspore.cn/tutorials/experts/en/master/network/control_flow.html#shapejoin-rules).
Example 1:
@@ -930,14 +928,12 @@ Parameter: `cond` -- Variables of `Bool` type and constants of `Bool`, `List`, `
Restrictions:
-- If `cond` is not a constant, the variable or constant assigned to a same sign inside body of `while` and outside body of `while` should have same data type.If the data type of assigned variables or constants is `Tensor`, the variables and constants should have same shape and element type.
+- If `cond` is not a constant, the variable or constant assigned to a same sign inside body of `while` and outside body of `while` should have same data type. If the data type of assigned variables or constants is `Tensor`, the variables and constants should have same shape and element type. For shape consistency restrictions, please refer to [ShapeJoin Rules](https://www.mindspore.cn/tutorials/experts/en/master/network/control_flow.html#shapejoin-rules).
- The `while...else...` statement is not supported.
- If `cond` is not a constant, in while body, the data with type of `Number`, `List`, `Tuple` are not allowed to update and the shape of `Tensor` data are not allowed to change.
-- The number of `while` cannot exceed 100.
-
Example 1:
```python
diff --git a/docs/recommender/docs/source_en/images/offline_training.png b/docs/recommender/docs/source_en/images/offline_training.png
index 8d0993a881318d0a5b802973187ac2aad327f7a1..41eac8a2105a981227866823e209cd04e8ccb391 100644
Binary files a/docs/recommender/docs/source_en/images/offline_training.png and b/docs/recommender/docs/source_en/images/offline_training.png differ
diff --git a/docs/recommender/docs/source_en/images/online_training.png b/docs/recommender/docs/source_en/images/online_training.png
index 40b43f66b44d51d33723a2bb1de4515168ed502a..230b248e36abb4db7acf1a7d42524ccf1a03a0da 100644
Binary files a/docs/recommender/docs/source_en/images/online_training.png and b/docs/recommender/docs/source_en/images/online_training.png differ
diff --git a/docs/recommender/docs/source_en/index.rst b/docs/recommender/docs/source_en/index.rst
index d36b23bfe34d8e73d9a316f0cb1742f1fdc34485..5b25cd96876d85e5820fbe943c3d874103f1ed0a 100644
--- a/docs/recommender/docs/source_en/index.rst
+++ b/docs/recommender/docs/source_en/index.rst
@@ -1,11 +1,18 @@
MindSpore Recommender Documents
================================
-MindSpore Recommender是一个构建在MindSpore框架基础上,面向推荐领域的开源训练加速库,通过MindSpore大规模的异构计算加速能力,MindSpore Recommender支持在线以及离线场景大规模动态特征的高效训练。
+MindSpore Recommender is an open-source training acceleration library for the recommendation domain, built on the MindSpore framework. Leveraging MindSpore's large-scale heterogeneous computing acceleration, MindSpore Recommender supports efficient training of large-scale dynamic features in both online and offline scenarios.
.. raw:: html
-
+