diff --git a/docs/federated/docs/source_en/deploy_federated_server.md b/docs/federated/docs/source_en/deploy_federated_server.md
index 6261e70e85f69a846ca55666ed89a6dea7c97d62..69d321dba6ac8b905e4a97ee212cfe7aa9caf09f 100644
--- a/docs/federated/docs/source_en/deploy_federated_server.md
+++ b/docs/federated/docs/source_en/deploy_federated_server.md
@@ -1,14 +1,14 @@
-# Cloud-based Deployment
+# Horizontal Federated Cloud-based Deployment
-The following uses LeNet as an example to describe how to use MindSpore to deploy a federated learning cluster.
+The following uses LeNet as an example to describe how to use MindSpore Federated to deploy a horizontal federated learning cluster.
The following figure shows the physical architecture of the MindSpore Federated Learning (FL) Server cluster:

-As shown in the preceding figure, in the federated learning cloud cluster, there are three MindSpore process roles: `Federated Learning Scheduler`, `Federated Learning Server` and `Federated Learning Worker`:
+As shown in the preceding figure, in the horizontal federated learning cloud cluster, there are three MindSpore process roles: `Federated Learning Scheduler`, `Federated Learning Server` and `Federated Learning Worker`:
- Federated Learning Scheduler
@@ -39,7 +39,7 @@ As shown in the preceding figure, in the federated learning cloud cluster, there
### Installing MindSpore
-The MindSpore federated learning cloud cluster supports deployment on x86 CPU and GPU CUDA hardware platforms. Run commands provided by the [MindSpore Installation Guide](https://www.mindspore.cn/install) to install the latest MindSpore.
+The MindSpore horizontal federated learning cloud cluster supports deployment on x86 CPU and GPU CUDA hardware platforms. Run commands provided by the [MindSpore Installation Guide](https://www.mindspore.cn/install) to install the latest MindSpore.
### Installing MindSpore Federated
diff --git a/docs/federated/docs/source_en/images/HFL_en.png b/docs/federated/docs/source_en/images/HFL_en.png
new file mode 100644
index 0000000000000000000000000000000000000000..f7b5adac95b8dff2fc010fa49607c706b67daab7
Binary files /dev/null and b/docs/federated/docs/source_en/images/HFL_en.png differ
diff --git a/docs/federated/docs/source_en/images/VFL_en.png b/docs/federated/docs/source_en/images/VFL_en.png
new file mode 100644
index 0000000000000000000000000000000000000000..818bb3de139b9ae2d499c18383d08f324e2b634d
Binary files /dev/null and b/docs/federated/docs/source_en/images/VFL_en.png differ
diff --git a/docs/federated/docs/source_en/images/splitnn_pangu_alpha_en.png b/docs/federated/docs/source_en/images/splitnn_pangu_alpha_en.png
new file mode 100644
index 0000000000000000000000000000000000000000..a2c0f6e806a5f5f2b5c3e269ab2810b82fd0da9e
Binary files /dev/null and b/docs/federated/docs/source_en/images/splitnn_pangu_alpha_en.png differ
diff --git a/docs/federated/docs/source_en/index.rst b/docs/federated/docs/source_en/index.rst
index b90ecfa280dbb5eb9a712de643f981cfc717f39a..7eab44a01a6723d6225ef00a0eec52d8f3ec7bd0 100644
--- a/docs/federated/docs/source_en/index.rst
+++ b/docs/federated/docs/source_en/index.rst
@@ -48,7 +48,7 @@ MindSpore Federated Working Process
Identify scenarios where federated learning is used and accumulate local data for federated tasks on the client.
-2. `Model Selection and Client Deployment `_
+2. `Model Selection and Client Deployment `_
Select or develop a model prototype and use a tool to generate a device model that is easy to deploy.
diff --git a/docs/federated/docs/source_en/split_pangu_alpha_application.md b/docs/federated/docs/source_en/split_pangu_alpha_application.md
index 7ca182d829a5517734d297517d95925fbd4ae442..c895e7041db908204f8107830250ecf55fce1242 100644
--- a/docs/federated/docs/source_en/split_pangu_alpha_application.md
+++ b/docs/federated/docs/source_en/split_pangu_alpha_application.md
@@ -8,6 +8,8 @@ With the advancement of hardware computing power and the continuous expansion of
MindSpore Federated provides basic functional components for vertical federated learning based on split learning. This example demonstrates federated learning training of a large NLP model, using the PanGu-α model as an example.
+
+![](./images/splitnn_pangu_alpha_en.png)
As shown in the figure above, in this case, the PanGu-α model is split into three subnetworks: Embedding, Backbone, and Head. The front-end subnetwork Embedding and the tail subnetwork Head are deployed in the network domain of participant A, while the Backbone subnetwork, which contains multiple levels of Transformer modules, is deployed in the network domain of participant B. The Embedding and Head subnetworks read the data held by participant A and lead the training and inference tasks of the PanGu-α model.
* In the forward inference stage, participant A uses the Embedding subnetwork to process the original data and transmits the output Embedding Feature tensor and Attention Mask Feature tensor to participant B as the input of participant B's Backbone subnetwork. Participant A then reads the Hidden State Feature tensor output by the Backbone subnetwork as the input of participant A's Head subnetwork, and finally the Head subnetwork outputs the predicted result or loss value.
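The forward flow described above can be sketched as follows. This is a minimal illustrative stand-in: the three stages are plain NumPy linear layers with made-up shapes, not the real PanGu-α subnetworks or the MindSpore Federated API, and it omits the Attention Mask tensor and all cross-domain network transport.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearStage:
    """A toy subnetwork: one linear layer followed by a ReLU."""
    def __init__(self, n_in, n_out):
        self.w = rng.standard_normal((n_in, n_out)) * 0.1
    def forward(self, x):
        return np.maximum(x @ self.w, 0.0)

# Participant A holds the raw data plus the front (Embedding) and
# tail (Head) stages; participant B holds only the Backbone.
embedding = LinearStage(16, 32)   # participant A
backbone  = LinearStage(32, 32)   # participant B
head      = LinearStage(32, 4)    # participant A

def forward_pass(raw_batch):
    # A: raw data -> Embedding Feature tensor (sent to B)
    emb_feature = embedding.forward(raw_batch)
    # B: Embedding Feature -> Hidden State Feature tensor (sent back to A)
    hidden_state = backbone.forward(emb_feature)
    # A: Hidden State -> prediction (or loss) from the Head subnetwork
    return head.forward(hidden_state)

batch = rng.standard_normal((8, 16))  # a batch of 8 samples held by A
pred = forward_pass(batch)
print(pred.shape)
```

Only the intermediate feature tensors cross the participant boundary; the raw data never leaves participant A, which is the core privacy property of split learning.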
diff --git a/docs/federated/docs/source_zh_cn/deploy_federated_server.md b/docs/federated/docs/source_zh_cn/deploy_federated_server.md
index 8180d1424bd44e7c5848b4d8b3fd868f6296119a..7d32bfb5b72b7cfc96a2b56f5f02178d4e9408f0 100644
--- a/docs/federated/docs/source_zh_cn/deploy_federated_server.md
+++ b/docs/federated/docs/source_zh_cn/deploy_federated_server.md
@@ -8,7 +8,7 @@ The physical architecture of the MindSpore Federated Learning (FL) Server cluster is shown in the figure:

-As shown in the figure above, in the horizontal federated learning cloud cluster, there are three MindSpore process roles: `Federated Learning Scheduler`, `Federated Learning Server` and `Federated Learning Worker`:
+As shown in the figure above, in the horizontal federated learning cloud cluster, there are three MindSpore process roles: `Federated Learning Scheduler`, `Federated Learning Server` and `Federated Learning Worker`:
- Federated Learning Scheduler