diff --git a/docs/devtoolkit/docs/Makefile b/docs/devtoolkit/docs/Makefile
deleted file mode 100644
index 1eff8952707bdfa503c8d60c1e9a903053170ba2..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/Makefile
+++ /dev/null
@@ -1,20 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS ?=
-SPHINXBUILD ?= sphinx-build
-SOURCEDIR = source_zh_cn
-BUILDDIR = build_zh_cn
-
-# Put it first so that "make" without argument is like "make help".
-help:
- @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
- @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/devtoolkit/docs/requirements.txt b/docs/devtoolkit/docs/requirements.txt
deleted file mode 100644
index a1b6a69f6dbd9c6f78710f56889e14f0e85b27f4..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/requirements.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-sphinx == 4.4.0
-docutils == 0.17.1
-myst-parser == 0.18.1
-sphinx_rtd_theme == 1.0.0
-numpy
-IPython
-jieba
diff --git a/docs/devtoolkit/docs/source_en/PyCharm_change_version.md b/docs/devtoolkit/docs/source_en/PyCharm_change_version.md
deleted file mode 100644
index 0098ecec9f755781bbef0a1500bf5e11409d9efc..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/PyCharm_change_version.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# API Mapping - API Version Switching
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/PyCharm_change_version.md)
-
-## Overview
-
-API mapping refers to the mapping between PyTorch APIs and MindSpore APIs.
-MindSpore Dev Toolkit provides two related features, API mapping search and API mapping scan, and lets users freely switch the version of the API mapping data.
-
-## API Mapping Data Version Switching
-
-1. When the plug-in starts, the API mapping data version defaults to the plug-in's own version and is shown in the lower-right status bar. This version number affects only the API mapping features described in this section and does not change the version of MindSpore installed in the environment.
-
- 
-
-2. Click the API mapping data version to open the selection list. Click a preset version to switch to it, or choose "other version" and enter a different version number to switch to that version.
-
- 
-
-3. Click any version number to start switching versions; an animation below indicates the switching status.
-
- 
-
-4. To customize the version number, select "other version" in the selection list, enter the version number in the pop-up box, and click OK to start switching. Note: enter the version number in the 2.1 or 2.1.0 format; otherwise, nothing happens when you click OK.
-
- 
-
-5. If the switch succeeds, the lower-right status bar displays the API mapping data version after the switch.
-
- 
-
-6. If the switch fails, the lower-right status bar shows the API mapping data version from before the switch. If the failure is caused by a nonexistent version number or a network error, check and try again. To see the latest documentation, switch to the master version.
-
- 
-
-7. When switching to a customized version number succeeds, that version number is added to the version list.
-
- 
\ No newline at end of file
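
> The deleted page above notes that custom version numbers must follow the 2.1 or 2.1.0 format. As a rough illustration only (a sketch, not the plug-in's actual validation code, which is not shown in this document), such a format check can be written as:

```python
import re

# Accepts "major.minor" or "major.minor.patch" version strings,
# e.g. "2.1" or "2.1.0". Illustrative only; the plug-in's real
# validation logic is not published here.
_VERSION_RE = re.compile(r"\d+\.\d+(\.\d+)?")

def is_valid_version(text: str) -> bool:
    """Return True if text matches the 2.1 / 2.1.0 style format."""
    return _VERSION_RE.fullmatch(text) is not None

print(is_valid_version("2.1"))    # True
print(is_valid_version("2.1.0"))  # True
print(is_valid_version("v2.1"))   # False
```

`fullmatch` (rather than `search`) ensures stray prefixes or extra components such as "v2.1" or "2.1.0.0" are rejected, matching the page's "no response" behavior for malformed input.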
diff --git a/docs/devtoolkit/docs/source_en/PyCharm_plugin_install.md b/docs/devtoolkit/docs/source_en/PyCharm_plugin_install.md
deleted file mode 100644
index 2850c6f4bd53ee005b6be78f15ccb3d7719a0dc3..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/PyCharm_plugin_install.md
+++ /dev/null
@@ -1,13 +0,0 @@
-# PyCharm Plug-in Installation
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/PyCharm_plugin_install.md)
-
-## Installation Steps
-
-1. Obtain [Plug-in Zip package](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/IdePlugin/any/MindSpore_Dev_ToolKit-2.1.0.zip).
-2. Start PyCharm, and from the upper-left menu bar choose File -> Settings -> Plugins -> Install Plugin from Disk.
- As shown in the figure:
-
- 
-
-3. Select the plug-in zip package.
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_en/VSCode_api_scan.md b/docs/devtoolkit/docs/source_en/VSCode_api_scan.md
deleted file mode 100644
index f4c5940a574d63163fc6481b7e32943d30def042..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/VSCode_api_scan.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# API Mapping - API Scanning
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_api_scan.md)
-
-## Functions Introduction
-
-* Quickly scan for APIs that appear in your code and display API details directly in the sidebar.
-* To help users of other machine learning frameworks, the plug-in scans the mainstream framework APIs that appear in the code and matches them to the corresponding MindSpore APIs.
-* The data version of API mapping supports switching. Please refer to the section [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/en/master/VSCode_change_version.html) for details.
-
-## File-level API Mapping Scanning
-
-1. Right-click anywhere in the current file to open the menu and select "Scan Local Files".
-
- 
-
-2. The right-hand column is populated with the operators scanned in the current file, grouped into three result lists: "PyTorch APIs that can be transformed", "Probably the result of torch.Tensor API", and "PyTorch API that does not provide a direct mapping relationship at this time".
-
- where
-
-    * "PyTorch APIs that can be transformed" means PyTorch APIs used in the file that can be converted to MindSpore APIs.
-    * "Probably the result of torch.Tensor API" means APIs with the same name as a torch.Tensor API; they may be torch.Tensor APIs and can be converted to MindSpore APIs.
-    * "PyTorch API that does not provide a direct mapping relationship at this time" means APIs that are PyTorch APIs, or possibly torch.Tensor APIs, but do not directly correspond to MindSpore APIs.
-
- 
-
-## Project-level API Mapping Scanning
-
-1. Click the MindSpore API Mapping Scan icon on the left sidebar of Visual Studio Code.
-
- 
-
-2. A project tree view of the current IDE project, containing only Python files, is generated in the left sidebar.
-
- 
-
-3. If you select a single Python file in the view, you can get a list of operator scan results for that file.
-
- 
-
-4. If you select a file directory in the view, you can get a list of operator scan results for all Python files in that directory.
-
- 
-
-5. All blue text is clickable and automatically opens the corresponding page in the user's default browser.
-
- 
-
- 
diff --git a/docs/devtoolkit/docs/source_en/VSCode_api_search.md b/docs/devtoolkit/docs/source_en/VSCode_api_search.md
deleted file mode 100644
index 83b4fb4bcc26ce8d07de01074930a7efca4a4b38..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/VSCode_api_search.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# API Mapping - API Search
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_api_search.md)
-
-## Function Introduction
-
-* Quickly search for MindSpore APIs and display API details directly in the sidebar.
-* To help users of other machine learning frameworks, the plug-in matches the corresponding MindSpore APIs when you search for the APIs of other mainstream frameworks.
-* The data version of API mapping supports switching. Please refer to the section [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/en/master/VSCode_change_version.html) for details.
-
-## Usage Steps
-
-1. Click the MindSpore API Mapping Search icon on the left sidebar of Visual Studio Code.
-
- 
-
-2. An input box appears in the left sidebar.
-
- 
-
-3. Enter a keyword in the input box; the search results for the current keyword are displayed below and update in real time as you type.
-
- 
-
-4. Click any search result to open the page in the user's default browser.
-
- 
-
- 
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_en/VSCode_change_version.md b/docs/devtoolkit/docs/source_en/VSCode_change_version.md
deleted file mode 100644
index 6469fa1f78a21af96e7f0fff45f7412af03fca45..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/VSCode_change_version.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# API Mapping - Version Switching
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_change_version.md)
-
-## Overview
-
-API mapping refers to the mapping between PyTorch APIs and MindSpore APIs. MindSpore Dev Toolkit provides two related features, API mapping search and API mapping scan, and lets users freely switch the version of the API mapping data.
-
-## API Mapping Data Version Switching
-
-1. Different versions of API mapping data produce different API mapping scan and search results, but do not affect the version of MindSpore in the environment. The default version matches the plugin version, and the version information is displayed in the lower-left status bar.
-
- 
-
-2. Clicking this status bar opens a drop-down box at the top of the page containing the preset version numbers that can be switched to. Click any version number to switch, or click the "Customize Input" option and enter another version number in the pop-up input box.
-
- 
-
-3. Click any version number to start switching; the lower-left status bar indicates the switching status.
-
- 
-
-4. To customize the version number, click the "Customize Input" option in the drop-down box. The drop-down box changes to an input box; enter a version number in the 2.1 or 2.1.0 format and press Enter to start switching. The lower-left status bar indicates the switching status.
-
- 
-
- 
-
-5. If the switch succeeds, a message in the lower right confirms it, and the lower-left status bar displays the API mapping data version after the switch.
-
- 
-
-6. If the switch fails, a message in the lower right reports the failure, and the lower-left status bar shows the API mapping data version from before the switch. If the failure is caused by a nonexistent version number or a network error, check and try again. To see the latest documentation, switch to the master version.
-
- 
-
-7. When switching to a customized version number succeeds, that version number is added to the drop-down box.
-
- 
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_en/VSCode_plugin_install.md b/docs/devtoolkit/docs/source_en/VSCode_plugin_install.md
deleted file mode 100644
index 7f192ae88ac085657900b582550961f6592344eb..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/VSCode_plugin_install.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# Visual Studio Code Plug-in Installation
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_plugin_install.md)
-
-## Installation Steps
-
-1. Obtain [Plug-in vsix package](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/IdePlugin/any/mindspore-dev-toolkit-2.1.0.vsix).
-2. Click the fifth button on the left, "Extensions". Click the three dots in the upper right corner, and then click "Install from VSIX..."
-
- 
-
-3. Select the downloaded vsix file in the file dialog and the plug-in will be installed automatically. When "Completed installing MindSpore Dev Toolkit extension from VSIX" appears in the lower-right corner, the plug-in has been installed successfully.
-
- 
-
-4. Click the refresh button in the left column; the "MindSpore Dev Toolkit" plug-in now appears in the "INSTALLED" list, confirming that it is installed.
-
- 
diff --git a/docs/devtoolkit/docs/source_en/VSCode_smart_completion.md b/docs/devtoolkit/docs/source_en/VSCode_smart_completion.md
deleted file mode 100644
index ad4f7ec61e109e178c86d6a25a3d74509054369d..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/VSCode_smart_completion.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Code Completion
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_smart_completion.md)
-
-## Functions Description
-
-* Provides AI code completion based on the MindSpore project.
-* Lets you develop with MindSpore easily, without installing a MindSpore environment.
-
-## Usage Steps
-
-1. When you install or use the plug-in for the first time, the model is downloaded automatically: "Start Downloading Model" appears in the lower-right corner, and "Download Model Successfully" indicates that the model has been downloaded and started. If your network is slow, the download can take ten minutes or more, and the success message appears only after it completes. These messages do not appear on subsequent uses.
-
- 
-
-2. Open a Python file and start writing code.
-
- 
-
-3. Completion takes effect automatically while you code. Suggestions labeled with the MindSpore Dev Toolkit suffix are provided by the plug-in's smart completion.
-
- 
diff --git a/docs/devtoolkit/docs/source_en/api_scanning.md b/docs/devtoolkit/docs/source_en/api_scanning.md
deleted file mode 100644
index 687c6f09bc1a5ac8a3f734bc2391e5a7e2ee46a6..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/api_scanning.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# API Mapping - API Scanning
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/api_scanning.md)
-
-## Functions Introduction
-
-* Quickly scan the APIs in the code and display the API details directly in the sidebar.
-* To help users of other machine learning frameworks, the plug-in scans the mainstream framework APIs that appear in the code and matches them to the corresponding MindSpore APIs.
-* The data version of API mapping supports switching. Please refer to the section [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/en/master/PyCharm_change_version.html) for details.
-
-## Usage Steps
-
-### Document-level API Scanning
-
-1. Right click anywhere in the current file to open the menu, and click "API scan" at the top of the menu.
-
- 
-
-2. The right sidebar opens automatically, showing the scanned operators in a detailed list that contains names, URLs, and other information. If no operators are found in the file, no pop-up window appears.
-
- where:
-
-    * "PyTorch/TensorFlow APIs that can be converted to MindSpore APIs" means PyTorch or TensorFlow APIs used in the file that can be converted to MindSpore APIs.
-    * "APIs that cannot be converted at this time" means APIs that are PyTorch or TensorFlow APIs but do not have a direct MindSpore equivalent.
-    * "Possible PyTorch/TensorFlow API" refers to cases where, because of chained calls, an API may be a PyTorch or TensorFlow API but cannot be identified with certainty.
-    * TensorFlow API scanning is an experimental feature.
-
- 
-
-3. Click the blue text, and a new column opens automatically at the top to display the page.
-
- 
-
-4. Click the "export" button in the upper-right corner to export the results to a CSV file.
-
- 
-
-### Project-level API Scanning
-
-1. Right-click anywhere in the current file to open the menu and click the second option, "API scan project-level", or select "Tools" in the toolbar above and then "API scan project-level".
-
- 
-
- 
-
-2. The right sidebar shows the operators scanned from the entire project in a detailed list that contains names, URLs, and other information.
-
- 
-
-3. Select a single file in the upper box to show that file's operators separately in the lower box; the file selection can be changed at any time.
-
- 
-
- 
-
-4. Click the blue text, and a new column opens automatically at the top to display the page.
-
- 
-
-5. Click the "export" button in the upper-right corner to export the results to a CSV file.
-
- 
diff --git a/docs/devtoolkit/docs/source_en/api_search.md b/docs/devtoolkit/docs/source_en/api_search.md
deleted file mode 100644
index fdc498109f16503d20ee17297fe71cb3491de439..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/api_search.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# API Mapping - API Search
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/api_search.md)
-
-## Functions
-
-* You can quickly search for MindSpore APIs and view API details in the sidebar.
-* If you use other machine learning frameworks, you can search for APIs of other mainstream frameworks to match MindSpore APIs.
-* The data version of API mapping supports switching. Please refer to the section [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/en/master/PyCharm_change_version.html) for details.
-
-## Procedure
-
-1. Double-click **Shift**. The global search page is displayed.
-
- 
-
-2. Click **MindSpore**.
-
- 
-
-3. Search for a PyTorch or TensorFlow API to obtain the mapping between the PyTorch or TensorFlow API and the MindSpore API.
-
- 
-
- 
-
-4. Click an item in the list to view its official document in the sidebar.
-
- 
diff --git a/docs/devtoolkit/docs/source_en/compiling.md b/docs/devtoolkit/docs/source_en/compiling.md
deleted file mode 100644
index b7b531c1ea2459f340cc18c62317014f6cf2faa5..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/compiling.md
+++ /dev/null
@@ -1,87 +0,0 @@
-# Source Code Compilation Guide
-
-[View Source](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/compiling.md)
-
-The following describes how to compile the MindSpore Dev ToolKit project from source using IntelliJ IDEA.
-
-## Background
-
-* MindSpore Dev ToolKit is a PyCharm plug-in developed using IntelliJ IDEA. [IntelliJ IDEA](https://www.jetbrains.com/idea/download) and PyCharm are IDEs developed by JetBrains.
-* MindSpore Dev ToolKit is developed based on JDK 11. To learn JDK- and Java-related knowledge, visit [https://jdk.java.net/](https://jdk.java.net/).
-* [Gradle 6.6.1](https://gradle.org) is used to build MindSpore Dev Toolkit and it does not need to be installed in advance. IntelliJ IDEA automatically uses the "gradle wrapper" mechanism to configure the required Gradle based on the code.
-
-## Required Software
-
-* [IntelliJ IDEA](https://www.jetbrains.com/idea/download)
-
-* JDK 11
-
- Note: IntelliJ IDEA 2021.3 contains a built-in JDK named **jbr-11 JetBrains Runtime version 11.0.10**, which can be used directly.
-
- 
-
-## Compilation
-
-1. Verify that the required software has been successfully configured.
-
-2. Download the [project](https://gitee.com/mindspore/ide-plugin) source code from the code repository.
-
- * Download the ZIP package.
-
- 
-
- * Run the git command to download the package.
-
- ```
- git clone https://gitee.com/mindspore/ide-plugin.git
- ```
-
-3. Use IntelliJ IDEA to open the project.
-
- 3.1 Choose **File** > **Open**.
-
- 
-
- 3.2 Go to the directory for storing the project.
-
- 
-
- 3.3 Click **Load** in the dialog box that is displayed in the lower right corner. Alternatively, click **pycharm**, right-click the **settings.gradle** file, and choose **Link Gradle Project** from the shortcut menu.
-
- 
-
- 
-
-4. If the system displays a message indicating that no JDK is available, select a JDK. ***Skip this step if the JDK is available.***
-
- 4.1 The following figure shows the situation when the JDK is not available.
-
- 
-
- 4.2 Choose **File** > **Project Structure**.
-
- 
-
- 4.3 Select JDK 11.
-
- 
-
-5. Wait until the synchronization is complete.
-
- 
-
-6. Build a project.
-
- 
-
-7. Wait until the build is complete.
-
- 
-
-8. Obtain the plug-in installation package from the **/pycharm/build/distributions** directory in the project directory.
-
- 
-
-## References
-
-* This project is built based on section [Building Plugins with Gradle](https://plugins.jetbrains.com/docs/intellij/gradle-build-system.html) in *IntelliJ Platform Plugin SDK*. For details about advanced functions such as debugging, see the official document.
diff --git a/docs/devtoolkit/docs/source_en/conf.py b/docs/devtoolkit/docs/source_en/conf.py
deleted file mode 100644
index 06e4e63c9fe2caa7ae185c9784c961b9dff9a4ac..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/conf.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import re
-
-# -- Project information -----------------------------------------------------
-
-project = 'MindSpore'
-copyright = 'MindSpore'
-author = 'MindSpore'
-
-# The full version, including alpha/beta/rc tags
-release = 'master'
-
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-myst_enable_extensions = ["dollarmath", "amsmath"]
-
-
-myst_heading_anchors = 5
-extensions = [
- 'myst_parser',
- 'sphinx.ext.autodoc'
-]
-
-source_suffix = {
- '.rst': 'restructuredtext',
- '.md': 'markdown',
-}
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js'
-
-mathjax_options = {
- 'async':'async'
-}
-
-smartquotes_action = 'De'
-
-exclude_patterns = []
-
-pygments_style = 'sphinx'
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-#
-html_theme = 'sphinx_rtd_theme'
-
-html_search_options = {'dict': '../../../resource/jieba.txt'}
-
-html_static_path = ['_static']
-
-src_release = os.path.join(os.getenv("DT_PATH"), 'RELEASE.md')
-des_release = "./RELEASE.md"
-with open(src_release, "r", encoding="utf-8") as f:
- data = f.read()
-if len(re.findall(r"\n## (.*?)\n", data)) > 1:
-    content = re.findall(r"(## [\s\S\n]*?)\n## ", data)
-else:
-    content = re.findall(r"(## [\s\S\n]*)", data)
-#result = content[0].replace('# MindSpore', '#', 1)
-with open(des_release, "w", encoding="utf-8") as p:
- p.write("# Release Notes"+"\n\n")
- p.write(content[0])
\ No newline at end of file
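
> The tail of the deleted conf.py copies the newest section of RELEASE.md into a Release Notes page. The same extraction logic can be exercised standalone; the sample text below is a made-up stand-in for the real RELEASE.md (which conf.py reads from the path in the DT_PATH environment variable):

```python
import re

# Made-up sample standing in for RELEASE.md.
data = """# MindSpore Dev Toolkit
## 2.1.0
- feature A
## 2.0.0
- feature B
"""

# Same logic as the deleted conf.py: if the file contains more than one
# "## x" heading, keep only the first (newest) section; otherwise keep
# the whole remainder.
if len(re.findall(r"\n## (.*?)\n", data)) > 1:
    content = re.findall(r"(## [\s\S]*?)\n## ", data)
else:
    content = re.findall(r"(## [\s\S]*)", data)

release_notes = "# Release Notes\n\n" + content[0]
print(release_notes)  # keeps only the "## 2.1.0" section
```

The non-greedy `[\s\S]*?` stops at the next `\n## ` heading, so only the topmost release section survives; with a single heading, the greedy fallback keeps everything to the end of the file.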
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image002.jpg b/docs/devtoolkit/docs/source_en/images/clip_image002.jpg
deleted file mode 100644
index 24132302f1552bed6be56b7dd660625448774680..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image002.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image004.jpg b/docs/devtoolkit/docs/source_en/images/clip_image004.jpg
deleted file mode 100644
index 7ed0e3729940f514a7bfd61c1a7be22166c0bb02..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image004.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image006.jpg b/docs/devtoolkit/docs/source_en/images/clip_image006.jpg
deleted file mode 100644
index e0c323eec249024fe19126ce4c931133564cf7b7..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image006.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image008.jpg b/docs/devtoolkit/docs/source_en/images/clip_image008.jpg
deleted file mode 100644
index a071ad67222931372d4b62f7b0cf334a4015e70d..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image008.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image010.jpg b/docs/devtoolkit/docs/source_en/images/clip_image010.jpg
deleted file mode 100644
index 43ca88d40bc56d5a5113bc29b97a7b559ef659af..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image010.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image012.jpg b/docs/devtoolkit/docs/source_en/images/clip_image012.jpg
deleted file mode 100644
index 0e35c9f219292913a51f1f0d5b7a5e154008620f..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image012.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image014.jpg b/docs/devtoolkit/docs/source_en/images/clip_image014.jpg
deleted file mode 100644
index 794de60c7d7e76a58e8d7212e449a1bd8e194b21..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image014.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image015.jpg b/docs/devtoolkit/docs/source_en/images/clip_image015.jpg
deleted file mode 100644
index 8172e21f871bed6866b3a91b83252838228d257a..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image015.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image016.jpg b/docs/devtoolkit/docs/source_en/images/clip_image016.jpg
deleted file mode 100644
index c836c0cf7898e4757ddb3410dde18e754894dd25..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image016.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image018.jpg b/docs/devtoolkit/docs/source_en/images/clip_image018.jpg
deleted file mode 100644
index 777738f7cf60454b7b5f26e6c5a29ba5d55750ff..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image018.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image019.jpg b/docs/devtoolkit/docs/source_en/images/clip_image019.jpg
deleted file mode 100644
index ab02b702bfd1c0986adb4a15c1f455b56df0a4a1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image019.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image020.jpg b/docs/devtoolkit/docs/source_en/images/clip_image020.jpg
deleted file mode 100644
index d946c3cb3a851f690a5643b5afe119597aed5b22..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image020.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image021.jpg b/docs/devtoolkit/docs/source_en/images/clip_image021.jpg
deleted file mode 100644
index 74672d9513f4a60f77450ae5516cee2060215241..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image021.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image022.jpg b/docs/devtoolkit/docs/source_en/images/clip_image022.jpg
deleted file mode 100644
index 6b26f18c7d8bb43db0beb8a8d2bd386489192922..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image022.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image023.jpg b/docs/devtoolkit/docs/source_en/images/clip_image023.jpg
deleted file mode 100644
index 5981a0fb25c681417a0bdf24d2392411b41f0faf..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image023.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image024.jpg b/docs/devtoolkit/docs/source_en/images/clip_image024.jpg
deleted file mode 100644
index 505e8b6c5c4d81dfd67d91b1dad08f938444368a..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image024.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image025.jpg b/docs/devtoolkit/docs/source_en/images/clip_image025.jpg
deleted file mode 100644
index 946e276b6a982303b6b146266037a739e6c2639a..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image025.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image026.jpg b/docs/devtoolkit/docs/source_en/images/clip_image026.jpg
deleted file mode 100644
index 8b787215af5cf9ae223c2b33121c3171923d2de2..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image026.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image027.jpg b/docs/devtoolkit/docs/source_en/images/clip_image027.jpg
deleted file mode 100644
index aa4d7d4a8a6b503fe29885368547daa535e34796..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image027.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image028.jpg b/docs/devtoolkit/docs/source_en/images/clip_image028.jpg
deleted file mode 100644
index 3126f80aecac28e8beaa54dc393122c60dbe1357..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image028.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image029.jpg b/docs/devtoolkit/docs/source_en/images/clip_image029.jpg
deleted file mode 100644
index 6587240e4a456f3792fece52bcdcbbed077ca67b..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image029.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image031.jpg b/docs/devtoolkit/docs/source_en/images/clip_image031.jpg
deleted file mode 100644
index 2f829b48e72e62525860cfe599e0a4ada82010ca..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image031.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image032.jpg b/docs/devtoolkit/docs/source_en/images/clip_image032.jpg
deleted file mode 100644
index 37589efbe0f57c442f665824831a2685d81c8713..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image032.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image033.jpg b/docs/devtoolkit/docs/source_en/images/clip_image033.jpg
deleted file mode 100644
index bdca68324cf7ee8f4e9bd18817a82954910e52c9..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image033.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image034.jpg b/docs/devtoolkit/docs/source_en/images/clip_image034.jpg
deleted file mode 100644
index 874b10d4b2ca476da16a4d1e749cdb6b31ecb59e..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image034.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image035.jpg b/docs/devtoolkit/docs/source_en/images/clip_image035.jpg
deleted file mode 100644
index 0b0465169553e57795320255295b8fa789950522..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image035.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image036.jpg b/docs/devtoolkit/docs/source_en/images/clip_image036.jpg
deleted file mode 100644
index c7c6c72819b655884d97637b696d1814e5a7fdbf..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image036.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image037.jpg b/docs/devtoolkit/docs/source_en/images/clip_image037.jpg
deleted file mode 100644
index 531e8184e02c43aa177a51c3cc32355cc3df9d42..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image037.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image038.jpg b/docs/devtoolkit/docs/source_en/images/clip_image038.jpg
deleted file mode 100644
index a8b4d88190c626139bad49cd42a9f7e908b4d0e4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image038.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image039.jpg b/docs/devtoolkit/docs/source_en/images/clip_image039.jpg
deleted file mode 100644
index 2eab0ceac9c1bd5d8b6ade3d65a6a3ce8b1f8fd4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image039.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image040.jpg b/docs/devtoolkit/docs/source_en/images/clip_image040.jpg
deleted file mode 100644
index a879fb1f12d8b6c4bd02332abf9b3bd734207763..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image040.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image042.jpg b/docs/devtoolkit/docs/source_en/images/clip_image042.jpg
deleted file mode 100644
index 2454ade258da6d428c9e23ece2adf7f0291d1a12..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image042.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image044.jpg b/docs/devtoolkit/docs/source_en/images/clip_image044.jpg
deleted file mode 100644
index cbff652015c36a5856afc909518f3c0fd22f23ff..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image044.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image046.jpg b/docs/devtoolkit/docs/source_en/images/clip_image046.jpg
deleted file mode 100644
index 58a493ea4f69b264fc69cfd3e34f32d5a171c303..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image046.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image050.jpg b/docs/devtoolkit/docs/source_en/images/clip_image050.jpg
deleted file mode 100644
index 35cc26d483358550c9a53ce855c2ae483eddb7e1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image050.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image060.jpg b/docs/devtoolkit/docs/source_en/images/clip_image060.jpg
deleted file mode 100644
index 7723975694f7f56d88187a69626343af11efbd23..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image060.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image062.jpg b/docs/devtoolkit/docs/source_en/images/clip_image062.jpg
deleted file mode 100644
index 838bc48ab8d77f7dbba9ca02925838a49b19ce53..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image062.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image064.jpg b/docs/devtoolkit/docs/source_en/images/clip_image064.jpg
deleted file mode 100644
index fb39e70b78b45af301973ea802a219c482a21590..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image064.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image066.jpg b/docs/devtoolkit/docs/source_en/images/clip_image066.jpg
deleted file mode 100644
index 0a596cfb3ef7a79674ff33a7be5c97859cc2b9c4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image066.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image068.jpg b/docs/devtoolkit/docs/source_en/images/clip_image068.jpg
deleted file mode 100644
index 0023ba9236a768001e462d6a10434719f3f733fd..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image068.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image072.jpg b/docs/devtoolkit/docs/source_en/images/clip_image072.jpg
deleted file mode 100644
index d1e5fad4192d4cb5821cafe4031dbc4ff599eccd..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image072.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image074.jpg b/docs/devtoolkit/docs/source_en/images/clip_image074.jpg
deleted file mode 100644
index 97fa2b21b4029ff75156893f8abdff2e77aa38bd..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image074.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image076.jpg b/docs/devtoolkit/docs/source_en/images/clip_image076.jpg
deleted file mode 100644
index e754c7dcd30ee6fa82ab20bde4d66f69aabe2fa7..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image076.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image088.jpg b/docs/devtoolkit/docs/source_en/images/clip_image088.jpg
deleted file mode 100644
index 8b85f0727893c4cf6cd258550466ca4f4a340e6e..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image088.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image090.jpg b/docs/devtoolkit/docs/source_en/images/clip_image090.jpg
deleted file mode 100644
index a3f405388fd75b23b652bc86475be5fd5e1f48ac..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image090.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image092.jpg b/docs/devtoolkit/docs/source_en/images/clip_image092.jpg
deleted file mode 100644
index 68ca9c66fc3f03760873075af20c8a9e28aaab48..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image092.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image093.jpg b/docs/devtoolkit/docs/source_en/images/clip_image093.jpg
deleted file mode 100644
index 594b2ceadb2c27290e0339e14b298fa2feffe6a9..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image093.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image094.jpg b/docs/devtoolkit/docs/source_en/images/clip_image094.jpg
deleted file mode 100644
index e931a95180d27d55590e73948ebe80a1f81bede1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image094.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/images/clip_image096.jpg b/docs/devtoolkit/docs/source_en/images/clip_image096.jpg
deleted file mode 100644
index 3ed0c88500bb4caffccea4d08aaa3a6310e177bd..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_en/images/clip_image096.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_en/index.rst b/docs/devtoolkit/docs/source_en/index.rst
deleted file mode 100644
index a5723cfae54780d9911757eadbe14d0991952572..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/index.rst
+++ /dev/null
@@ -1,61 +0,0 @@
-MindSpore Dev Toolkit
-============================
-
-MindSpore Dev Toolkit is a development kit delivered as a `PyCharm `_ (cross-platform Python IDE) plug-in developed by MindSpore, and it provides functions such as `Project creation `_, `intelligent completion `_, `API search `_, and `Document search `_.
-
-Through deep learning, intelligent search, and intelligent recommendation, MindSpore Dev Toolkit aims to create the best intelligent computing experience, improve the usability of the MindSpore framework, and facilitate the growth of the MindSpore ecosystem.
-
-Code repository address:
-
-System Requirements
-------------------------------
-
-- Operating systems supported by the plug-in:
-
- - Windows 10
-
- - Linux
-
- - macOS (Only the x86 architecture is supported. The code completion function is not available currently.)
-
-- PyCharm versions supported by the plug-in:
-
- - 2020.3
-
- - 2021.X
-
- - 2022.X
-
-.. toctree::
- :glob:
- :maxdepth: 1
- :caption: PyCharm Plugin Usage Guide
- :hidden:
-
- PyCharm_plugin_install
- compiling
- smart_completion
- PyCharm_change_version
- api_search
- api_scanning
- knowledge_search
- mindspore_project_wizard
-
-.. toctree::
- :glob:
- :maxdepth: 1
- :caption: VSCode Plugin Usage Guide
- :hidden:
-
- VSCode_plugin_install
- VSCode_smart_completion
- VSCode_change_version
- VSCode_api_search
- VSCode_api_scan
-
-.. toctree::
- :glob:
- :maxdepth: 1
- :caption: RELEASE NOTES
-
- RELEASE
diff --git a/docs/devtoolkit/docs/source_en/knowledge_search.md b/docs/devtoolkit/docs/source_en/knowledge_search.md
deleted file mode 100644
index 91a0606a577644b0a18684e55f0197b05bb72cfb..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/knowledge_search.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Document Search
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/knowledge_search.md)
-
-## Functions
-
-* Recommendation: it provides precise search results based on user habits.
-* It provides an immersive document search experience, avoiding switches between the IDE and the browser. It resides in the sidebar, adapting to the page layout.
-
-## Procedure
-
-1. Click the sidebar to display the search page.
-
- 
-
-2. Enter **API Mapping** and click the search icon to view the result.
-
- 
-
-3. Click the home icon to return to the search page.
-
- 
diff --git a/docs/devtoolkit/docs/source_en/mindspore_project_wizard.md b/docs/devtoolkit/docs/source_en/mindspore_project_wizard.md
deleted file mode 100644
index b26c4cc49d24905c229a2a1c32d71f05a7f8fd30..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/mindspore_project_wizard.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Creating a Project
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/mindspore_project_wizard.md)
-
-## Background
-
-This function is implemented based on [conda](https://conda.io), a package and environment management system and one of the recommended installation modes for MindSpore.
-
-## Functions
-
-* It creates a conda environment or selects an existing conda environment, and installs the MindSpore binary package in the conda environment.
-* It deploys the best practice template. In addition to testing whether the environment is successfully installed, it also provides a tutorial for MindSpore beginners.
-* When the network connection is good, the environment can be installed within 10 minutes, so you can experience MindSpore immediately. For beginners, it reduces environment configuration time by up to 80%.
-
-## Procedure
-
-1. Choose **File** > **New Project**.
-
- 
-
-2. Select **MindSpore**.
-
- 
-
-3. Download and install Miniconda. ***If conda has been installed, skip this step.***
-
- 3.1 Click **Install Miniconda Automatically**.
-
- 
-
- 3.2 Select an installation folder. **You are advised to use the default path to install conda.**
-
- 
-
- 3.3 Click **Install**.
-
- 
-
- 
-
- 3.4 Wait for the installation to complete.
-
- 
-
- 3.5 Restart PyCharm as prompted or restart PyCharm later. ***Note: The following steps can be performed only after PyCharm is restarted.***
-
- 
-
-4. If **Conda executable** is not automatically filled, select the path of the installed conda.
-
- 
-
-5. Create a conda environment or select an existing conda environment.
-
- * Create a conda environment. **You are advised to use the default path to create the conda environment. Due to PyCharm restrictions on Linux, you can only select the default directory.**
-
- 
-
- * Select an existing conda environment in PyCharm.
-
- 
-
-6. Select a hardware environment and a MindSpore best practice template.
-
- 6.1 Select a hardware environment.
-
- 
-
- 6.2 Select a best practice template. The best practice template provides some sample projects for beginners to get familiar with MindSpore. The best practice template can be run directly.
-
- 
-
-7. Click **Create** to create a project and wait until MindSpore is successfully downloaded and installed.
-
- 7.1 Click **Create** to create a MindSpore project.
-
- 
-
- 7.2 The conda environment is being created.
-
- 
-
- 7.3 MindSpore is being configured through conda.
-
- 
-
-8. Wait until the MindSpore project is created.
-
- 
-
-9. Check whether the MindSpore project is successfully created.
-
- * Click **Terminal**, enter **python -c "import mindspore;mindspore.run_check()"**, and check the output. If the version number shown in the following figure is displayed, the MindSpore environment is available.
-
- 
-
- * If you select a best practice template, you can run the best practice to test the MindSpore environment.
-
- 
-
- 
-
- 
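The environment check in step 9 can also be scripted. A minimal sketch, assuming only that the `mindspore` package may or may not be importable in the active conda environment (the helper name is ours, not from the toolkit):

```python
import importlib.util

def mindspore_available() -> bool:
    """Report whether the mindspore package can be imported."""
    return importlib.util.find_spec("mindspore") is not None

if mindspore_available():
    import mindspore
    # Same check as `python -c "import mindspore;mindspore.run_check()"`.
    mindspore.run_check()
else:
    print("MindSpore is not installed in this environment.")
```
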
diff --git a/docs/devtoolkit/docs/source_en/smart_completion.md b/docs/devtoolkit/docs/source_en/smart_completion.md
deleted file mode 100644
index a3082c79785732ff85a829618bce1c5c8ca72f11..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_en/smart_completion.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Code Completion
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/smart_completion.md)
-
-## Functions
-
-* It provides AI-based code completion for MindSpore projects.
-* You can develop MindSpore code easily, even without a MindSpore environment installed.
-
-## Procedure
-
-1. Open a Python file and write code.
-
- 
-
-2. While you are coding, code completion takes effect automatically. Code lines marked with the "MindSpore" identifier are completed by MindSpore Dev Toolkit.
-
- 
-
- 
-
-## Description
-
-1. In versions later than PyCharm 2021, completion results are re-ranked based on machine learning, which may cause the plug-in's completions to be displayed with lower priority. You can disable this function in **Settings** and let MindSpore Dev Toolkit sort the completions.
-
- 
-
-2. Comparison with the function disabled and enabled.
-
- * Function disabled
-
- 
-
- * Function enabled
-
- 
diff --git a/docs/devtoolkit/docs/source_zh_cn/PyCharm_change_version.md b/docs/devtoolkit/docs/source_zh_cn/PyCharm_change_version.md
deleted file mode 100644
index d0d553574bedc10d411ff45310718a05903b981d..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/PyCharm_change_version.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# API Mapping - API Version Switching
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/PyCharm_change_version.md)
-
-## Overview
-
-API mapping refers to the mapping between PyTorch APIs and MindSpore APIs.
-MindSpore Dev Toolkit provides two API mapping functions, API mapping search and API mapping scan, and users can freely switch the version of the API mapping data.
-
-## Switching the API Mapping Data Version
-
-1. When the plug-in starts, it uses the API mapping data version matching the current plug-in version by default. The version is displayed in the lower right corner. This version number only affects the API mapping functions in this chapter and does not change the MindSpore version in your environment.
-
- 
-
-2. Click the API mapping data version to open a selection list. You can click a preset version to switch to it, or choose "other version" to enter another version number.
-
- 
-
-3. Click any version number to start switching. An animation below indicates that the switch is in progress.
-
- 
-
-4. To enter a custom version number, choose "other version" in the list, enter the version number in the dialog box, and click OK to start switching. Note: enter the version number in the format 2.1 or 2.1.0; otherwise, clicking OK has no effect.
-
- 
-
-5. If the switch succeeds, the status bar in the lower right shows the new API mapping data version.
-
- 
-
-6. If the switch fails, the status bar in the lower right shows the previous API mapping data version. A nonexistent version number or a network error can cause the switch to fail; check and try again. To view the latest documentation, switch to the master version.
-
- 
-
-7. After a custom version number is switched to successfully, it is added to the displayed version list.
-
- 
-
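The version format required in step 4 (2.1 or 2.1.0) can be checked with a simple pattern. A purely illustrative sketch; the plug-in's actual validation logic is not shown in this document:

```python
import re

# Accepts "major.minor" or "major.minor.patch", e.g. "2.1" or "2.1.0".
VERSION_PATTERN = re.compile(r"\d+\.\d+(\.\d+)?")

def is_valid_version(text: str) -> bool:
    """Return True for version strings such as 2.1 or 2.1.0."""
    return VERSION_PATTERN.fullmatch(text) is not None
```
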
diff --git a/docs/devtoolkit/docs/source_zh_cn/PyCharm_plugin_install.md b/docs/devtoolkit/docs/source_zh_cn/PyCharm_plugin_install.md
deleted file mode 100644
index 9622c7c3159ed08594dc123af9bffaf56fafc354..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/PyCharm_plugin_install.md
+++ /dev/null
@@ -1,13 +0,0 @@
-# PyCharm Plugin Installation
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/PyCharm_plugin_install.md)
-
-## Installation Steps
-
-1. Obtain the [plugin ZIP package](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/IdePlugin/any/MindSpore_Dev_ToolKit-2.1.0.zip).
-2. Start PyCharm, click the upper-left menu bar, and choose File -> Settings -> Plugins -> Install Plugin from Disk,
-   as shown below:
-
- 
-
-3. Select the plugin ZIP package.
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_api_scan.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_api_scan.md
deleted file mode 100644
index b639ac7c0951b3011232d5723c9cb27c65e12e35..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/VSCode_api_scan.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# API Mapping - API Scanning
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_api_scan.md)
-
-## Functions
-
-* Quickly scans the APIs used in the code and displays API details directly in the sidebar.
-* For the convenience of users of other machine learning frameworks, scans mainstream framework APIs in the code and matches them to the corresponding MindSpore APIs.
-* The API mapping data version can be switched. For details, see [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/zh-CN/master/VSCode_change_version.html).
-
-## File-Level API Mapping Scan
-
-1. Right-click anywhere in the current file to open the menu, and choose "scan local files".
-
- 
-
-2. The right pane lists the operators scanned from the current file in three result lists: "PyTorch APIs that can be converted", "possible torch.Tensor API results", and "PyTorch APIs with no direct mapping provided yet".
-
-    Specifically:
-
-    * "PyTorch APIs that can be converted" are PyTorch APIs used in the file that can be converted to MindSpore APIs
-    * "Possible torch.Tensor APIs" are APIs whose names match torch.Tensor API names; they may be torch.Tensor APIs and may be convertible to MindSpore APIs
-    * "PyTorch APIs with no direct mapping provided yet" are PyTorch APIs, or possible torch.Tensor APIs, for which no direct MindSpore counterpart is available yet
-
- 
-
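The three result buckets above can be pictured as a lookup against an API mapping table. A purely illustrative sketch; the mapping data and the function below are toy examples, not the plug-in's actual implementation:

```python
# Toy mapping table and torch.Tensor method names; not real mapping data.
MAPPING = {"torch.abs": "mindspore.ops.abs"}   # has a direct mapping
TENSOR_METHODS = {"reshape", "view"}           # torch.Tensor method names

def classify(api: str) -> str:
    """Place an API name into one of the three scan buckets."""
    if api in MAPPING:
        return "convertible"
    if api.split(".")[-1] in TENSOR_METHODS:
        return "possible torch.Tensor API"
    return "no direct mapping yet"
```
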
-## Project-Level API Mapping Scan
-
-1. Click the MindSpore API Mapping Scan icon in the left sidebar of Visual Studio Code.
-
- 
-
-2. The left pane generates a project tree view containing only the Python files in the current IDE project.
-
- 
-
-3. Select a single Python file in the view to get the operator scan result list for that file.
-
- 
-
-4. Select a directory in the view to get the operator scan result lists for all Python files under that directory.
-
- 
-
-5. All blue text is clickable and opens the corresponding page in your default browser.
-
- 
-
- 
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_api_search.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_api_search.md
deleted file mode 100644
index e57ed777648e995b432f9a96ec58bc82a64502c3..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/VSCode_api_search.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# API Mapping - API Search
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_api_search.md)
-
-## Functions
-
-* Quickly searches MindSpore APIs and displays API details directly in the sidebar.
-* For the convenience of users of other machine learning frameworks, matches APIs of other mainstream frameworks to the corresponding MindSpore APIs.
-* The API mapping data version can be switched. For details, see [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/zh-CN/master/VSCode_change_version.html).
-
-## Procedure
-
-1. Click the MindSpore API Mapping Search icon in the left sidebar of Visual Studio Code.
-
- 
-
-2. An input box appears in the left sidebar.
-
- 
-
-3. Enter any word in the input box; search results for the keyword are displayed below and update in real time as you type.
-
- 
-
-4. Click any search result to open the corresponding page in your default browser.
-
- 
-
- 
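The real-time search in step 3 behaves like a substring filter over the API entries, re-run on every keystroke. An illustrative sketch with a toy API list (not the real mapping data):

```python
# Toy API list; the real entries come from the API mapping data.
APIS = ["mindspore.ops.abs", "mindspore.ops.add", "mindspore.nn.Cell"]

def search(keyword: str) -> list[str]:
    """Case-insensitive substring match over the API names."""
    kw = keyword.lower()
    return [api for api in APIS if kw in api.lower()]
```
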
diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_change_version.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_change_version.md
deleted file mode 100644
index 2822c877f55f06ea1c2927f850fb32582cc9dc2c..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/VSCode_change_version.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# API Mapping - Version Switching
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_change_version.md)
-
-## Overview
-
-API mapping refers to the mapping between PyTorch APIs and MindSpore APIs. MindSpore Dev Toolkit provides two API mapping functions, API mapping search and API mapping scan, and users can freely switch the version of the API mapping data.
-
-## Switching the API Mapping Data Version
-
-1. Different versions of the API mapping data lead to different API mapping scan and search results, but do not affect the MindSpore version in your environment. The default version matches the plug-in version; the version information is shown in the status bar in the lower left corner.
-
- 
-
-2. Click this status bar item; a drop-down list with the preset version numbers appears at the top of the page. Click any version number to switch to it, or choose the "custom input" option and enter another version number in the input box that then appears.
-
- 
-
-3. Click any version number to start switching; the status bar in the lower left indicates that the switch is in progress.
-
- 
-
-4. To enter a custom version number, choose the "custom input" option in the drop-down list; the drop-down list turns into an input box. Enter the version number in the format 2.1 or 2.1.0 and press Enter to start switching; the status bar in the lower left indicates the progress.
-
- 
-
- 
-
-5. If the switch succeeds, a message in the lower right reports success, and the status bar in the lower left shows the new API mapping data version.
-
- 
-
-6. If the switch fails, a message in the lower right reports the failure, and the status bar in the lower left shows the previous API mapping data version. A nonexistent version number or a network error can cause the switch to fail; check and try again. To view the latest documentation, switch to the master version.
-
- 
-
-7. After a custom version number is switched to successfully, it is added to the drop-down list.
-
- 
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_plugin_install.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_plugin_install.md
deleted file mode 100644
index 14ee27f84e661910b7db7959533b337050d0adf3..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/VSCode_plugin_install.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# Visual Studio Code Plugin Installation
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_plugin_install.md)
-
-## Installation Steps
-
-1. Obtain the [plugin VSIX package](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/IdePlugin/any/mindspore-dev-toolkit-2.1.0.vsix).
-2. Click the fifth button "Extensions" on the left, click the three dots in the upper right corner, and then click "Install from VSIX...".
-
- 
-
-3. Select the downloaded VSIX file from the folder; the plugin starts installing automatically. When "Completed installing MindSpore Dev Toolkit extension from VSIX" appears in the lower right corner, the plugin has been installed successfully.
-
- 
-
-4. Click the refresh button in the left pane; the "MindSpore Dev Toolkit" plugin appears in the "INSTALLED" list, which confirms the installation.
-
- 
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_smart_completion.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_smart_completion.md
deleted file mode 100644
index 80634d7f3c87c03e78e3f85f4b3e5bac03ca3b24..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/VSCode_smart_completion.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Code Completion
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_smart_completion.md)
-
-## Functions
-
-* Provides AI-based code completion for MindSpore projects.
-* You can develop MindSpore code easily without installing a MindSpore environment.
-
-## Procedure
-
-1. The first time the plugin is installed or used, the model is downloaded automatically. A "start downloading Model" message appears in the lower right corner, and a "Model downloaded successfully" message indicates that the model has been downloaded and started. On a slow network, the download can take ten minutes or more, and the success message appears only after it completes. On subsequent uses, no message is shown.
-
- 
-
-2. Open a Python file and write code.
-
- 
-
-3. Completion takes effect automatically as you type. Entries suffixed with "MindSpore Dev Toolkit" are provided by this plugin's smart completion.
-
- 
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/api_scanning.md b/docs/devtoolkit/docs/source_zh_cn/api_scanning.md
deleted file mode 100644
index f425da32cad074e934b0bf199184ac67d20368ce..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/api_scanning.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# API Mapping - API Scanning
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/api_scanning.md)
-
-## Functions
-
-* Quickly scans the APIs used in the code and displays API details directly in the sidebar.
-* For the convenience of users of other machine learning frameworks, scans mainstream framework APIs in the code and matches them to the corresponding MindSpore APIs.
-* The API mapping data version can be switched. For details, see [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/zh-CN/master/PyCharm_change_version.html).
-
-## Procedure
-
-### File-Level API Scanning
-
-1. Right-click anywhere in the current file to open the menu, and click "API scan" at the top of the menu.
-
- 
-
-2. The right pane pops up automatically and shows the scanned APIs in a detailed list that includes names, URLs, and other information. If no API is found in the file, no window pops up.
-
-    Specifically:
-
-    * "PyTorch/TensorFlow APIs that can be converted to MindSpore APIs" are PyTorch or TensorFlow APIs used in the file that can be converted to MindSpore APIs
-    * "APIs that cannot be converted yet" are PyTorch or TensorFlow APIs for which no direct MindSpore counterpart is available yet
-    * "Possible PyTorch/TensorFlow APIs" are calls that, because of chained invocation, may be convertible PyTorch or TensorFlow APIs
-    * TensorFlow API scanning is an experimental feature
-
- 
-
-3. All blue text is clickable and opens the corresponding page in a new pane above.
-
- 
-
-4. Click the "Export" button in the upper right corner to export the content to a CSV file.
-
- 
-
-### Project-Level API Scanning
-
-1. Right-click anywhere in the current file to open the menu and click the second item, "API scan project-level", or choose "Tools" in the toolbar above and then "API scan project-level".
-
- 
-
- 
-
-2. The right pane shows the APIs scanned from the entire project in a detailed list that includes names, URLs, and other information.
-
- 
-
-3. Select a single file in the upper box; its APIs are then listed separately in the lower box. You can switch between files freely.
-
- 
-
- 
-
-4. All blue text is clickable and opens the corresponding page in a new pane above.
-
- 
-
-5. Click the "Export" button to export the content to a CSV file.
-
- 
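The "Export" button writes the scan results as a CSV table. A hypothetical sketch of producing such a table; the actual export columns beyond name and URL are not specified in this document:

```python
import csv
import io

# Hypothetical scan results; the column names are assumptions.
rows = [
    {"name": "torch.abs",
     "url": "https://pytorch.org/docs/stable/generated/torch.abs.html"},
]

# Write the rows to an in-memory CSV buffer with a header line.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "url"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```
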
diff --git a/docs/devtoolkit/docs/source_zh_cn/api_search.md b/docs/devtoolkit/docs/source_zh_cn/api_search.md
deleted file mode 100644
index 948e7d70cc9131ce71b3571232d025bce5c70b09..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/api_search.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# API Mapping - API Search
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/api_search.md)
-
-## Functions
-
-* Quickly searches MindSpore APIs and displays API details directly in the sidebar.
-* For the convenience of users of other machine learning frameworks, matches APIs of other mainstream frameworks to the corresponding MindSpore APIs.
-* The API mapping data version can be switched. For details, see [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/zh-CN/master/PyCharm_change_version.html).
-
-## Procedure
-
-1. Press Shift twice to open the global search page.
-
- 
-
-2. Select MindSpore.
-
- 
-
-3. Enter the PyTorch or TensorFlow API to search for, and obtain the list of corresponding MindSpore APIs.
-
- 
-
- 
-
-4. Click an entry in the list to browse its official documentation in the right sidebar.
-
- 
diff --git a/docs/devtoolkit/docs/source_zh_cn/compiling.md b/docs/devtoolkit/docs/source_zh_cn/compiling.md
deleted file mode 100644
index ce30f02e48dbe63d091a3d4d99ee64dddad2b5e7..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/compiling.md
+++ /dev/null
@@ -1,86 +0,0 @@
-# Source Code Compilation Guide
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/compiling.md)
-
-This document describes how to compile the MindSpore Dev ToolKit project from source code with IntelliJ IDEA.
-
-## Background
-
-* MindSpore Dev ToolKit is a PyCharm plugin and must be developed with IntelliJ IDEA. [IntelliJ IDEA](https://www.jetbrains.com/idea/download) and PyCharm are both IDEs developed by JetBrains.
-* MindSpore Dev ToolKit is developed on JDK 11. If you are not familiar with the JDK, visit [https://jdk.java.net/](https://jdk.java.net/) to learn about the JDK and Java.
-* MindSpore Dev ToolKit is built with [Gradle](https://gradle.org) 6.6.1, which does not need to be installed in advance. IntelliJ IDEA automatically configures the required Gradle through the "gradle wrapper" mechanism.
-
-## Required Software
-
-* Install [IntelliJ IDEA](https://www.jetbrains.com/idea/download).
-
-* Install JDK 11.
-  Note: IntelliJ IDEA 2021.3 ships with a JDK named jbr-11 (JetBrains Runtime version 11.0.10), which can be used directly.
-
- 
-
-## Compilation
-
-1. Make sure the required software above is configured.
-
-2. Download the [project](https://gitee.com/mindspore/ide-plugin) source code from the code repository.
-
-    * Download the code ZIP package directly.
-
- 
-
-    * Or download it with git:
-
-    ```bash
-    git clone https://gitee.com/mindspore/ide-plugin.git
-    ```
-
-3. Open the project with IntelliJ IDEA.
-
-    3.1 Choose the Open option under the File tab. ***File -> Open***
-
- 
-
-    3.2 Open the location of the downloaded project files.
-
- 
-
-    3.3 Click Load in the pop-up at the lower right corner, or right-click the pycharm/settings.gradle file and select Link Gradle Project.
-
- 
-
- 
-
-4. If prompted that there is no JDK, select a JDK. ***Skip this step if a JDK is already configured.***
-
-    4.1 Without a JDK, the page looks as follows.
-
- 
-
-    4.2 Choose File -> Project Structure.
-
- 
-
-    4.3 Select JDK 11.
-
- 
-
-5. Wait for synchronization to complete.
-
- 
-
-6. Build the project.
-
- 
-
-7. The build completes.
-
- 
-
-8. After the build completes, obtain the plugin installation package from the /pycharm/build/distributions directory under the project directory.
-
- 
-
-## References
-
-* This project is built based on the [Building Plugins with Gradle](https://plugins.jetbrains.com/docs/intellij/gradle-build-system.html) chapter of the IntelliJ Platform Plugin SDK. For advanced functions such as debugging, read the official documentation.
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/conf.py b/docs/devtoolkit/docs/source_zh_cn/conf.py
deleted file mode 100644
index edcae7146aa7df7da99445745b5b2d269f55f9d6..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/conf.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import re
-
-# -- Project information -----------------------------------------------------
-
-project = 'MindSpore'
-copyright = 'MindSpore'
-author = 'MindSpore'
-
-# The full version, including alpha/beta/rc tags
-release = 'master'
-
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-myst_enable_extensions = ["dollarmath", "amsmath"]
-
-
-myst_heading_anchors = 5
-extensions = [
- 'myst_parser',
- 'sphinx.ext.autodoc'
-]
-
-source_suffix = {
- '.rst': 'restructuredtext',
- '.md': 'markdown',
-}
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js'
-
-mathjax_options = {
- 'async':'async'
-}
-
-smartquotes_action = 'De'
-
-exclude_patterns = []
-
-pygments_style = 'sphinx'
-
-# -- Options for HTML output -------------------------------------------------
-
-language = 'zh_CN'
-locale_dirs = ['../../../../resource/locale/']
-gettext_compact = False
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-#
-html_theme = 'sphinx_rtd_theme'
-
-html_search_options = {'dict': '../../../resource/jieba.txt'}
-
-html_static_path = ['_static']
-
-src_release = os.path.join(os.getenv("DT_PATH"), 'RELEASE_CN.md')
-des_release = "./RELEASE.md"
-with open(src_release, "r", encoding="utf-8") as f:
- data = f.read()
-if len(re.findall(r"\n## (.*?)\n", data)) > 1:
-    content = re.findall(r"(## [\s\S]*?)\n## ", data)
-else:
-    content = re.findall(r"(## [\s\S]*)", data)
-#result = content[0].replace('# MindSpore', '#', 1)
-with open(des_release, "w", encoding="utf-8") as p:
- p.write("# Release Notes"+"\n\n")
- p.write(content[0])
\ No newline at end of file
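The release-notes extraction in conf.py above keeps only the first "## " section of RELEASE_CN.md. The same regex logic, demonstrated on a toy string:

```python
import re

data = "# Title\n\n## 2.1.0\nnotes A\n\n## 2.0.0\nnotes B\n"

# As in conf.py: with more than one "## " heading, capture the first
# section non-greedily, up to the next "## " heading.
if len(re.findall(r"\n## (.*?)\n", data)) > 1:
    content = re.findall(r"(## [\s\S]*?)\n## ", data)
else:
    content = re.findall(r"(## [\s\S]*)", data)
```
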
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image002.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image002.jpg
deleted file mode 100644
index 24132302f1552bed6be56b7dd660625448774680..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image002.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image004.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image004.jpg
deleted file mode 100644
index 7ed0e3729940f514a7bfd61c1a7be22166c0bb02..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image004.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image006.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image006.jpg
deleted file mode 100644
index e0c323eec249024fe19126ce4c931133564cf7b7..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image006.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image008.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image008.jpg
deleted file mode 100644
index a071ad67222931372d4b62f7b0cf334a4015e70d..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image008.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image010.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image010.jpg
deleted file mode 100644
index 43ca88d40bc56d5a5113bc29b97a7b559ef659af..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image010.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image012.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image012.jpg
deleted file mode 100644
index 0e35c9f219292913a51f1f0d5b7a5e154008620f..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image012.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image014.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image014.jpg
deleted file mode 100644
index 794de60c7d7e76a58e8d7212e449a1bd8e194b21..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image014.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image015.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image015.jpg
deleted file mode 100644
index 8172e21f871bed6866b3a91b83252838228d257a..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image015.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image016.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image016.jpg
deleted file mode 100644
index c836c0cf7898e4757ddb3410dde18e754894dd25..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image016.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image018.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image018.jpg
deleted file mode 100644
index 777738f7cf60454b7b5f26e6c5a29ba5d55750ff..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image018.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image019.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image019.jpg
deleted file mode 100644
index ab02b702bfd1c0986adb4a15c1f455b56df0a4a1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image019.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image020.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image020.jpg
deleted file mode 100644
index d946c3cb3a851f690a5643b5afe119597aed5b22..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image020.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image021.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image021.jpg
deleted file mode 100644
index 74672d9513f4a60f77450ae5516cee2060215241..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image021.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image022.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image022.jpg
deleted file mode 100644
index 6b26f18c7d8bb43db0beb8a8d2bd386489192922..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image022.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image023.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image023.jpg
deleted file mode 100644
index 5981a0fb25c681417a0bdf24d2392411b41f0faf..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image023.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image024.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image024.jpg
deleted file mode 100644
index 505e8b6c5c4d81dfd67d91b1dad08f938444368a..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image024.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image025.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image025.jpg
deleted file mode 100644
index 946e276b6a982303b6b146266037a739e6c2639a..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image025.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image026.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image026.jpg
deleted file mode 100644
index 8b787215af5cf9ae223c2b33121c3171923d2de2..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image026.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image027.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image027.jpg
deleted file mode 100644
index aa4d7d4a8a6b503fe29885368547daa535e34796..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image027.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image028.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image028.jpg
deleted file mode 100644
index 3126f80aecac28e8beaa54dc393122c60dbe1357..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image028.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image029.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image029.jpg
deleted file mode 100644
index 6587240e4a456f3792fece52bcdcbbed077ca67b..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image029.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image031.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image031.jpg
deleted file mode 100644
index 2f829b48e72e62525860cfe599e0a4ada82010ca..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image031.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image032.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image032.jpg
deleted file mode 100644
index 37589efbe0f57c442f665824831a2685d81c8713..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image032.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image033.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image033.jpg
deleted file mode 100644
index bdca68324cf7ee8f4e9bd18817a82954910e52c9..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image033.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image034.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image034.jpg
deleted file mode 100644
index 874b10d4b2ca476da16a4d1e749cdb6b31ecb59e..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image034.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image035.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image035.jpg
deleted file mode 100644
index 0b0465169553e57795320255295b8fa789950522..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image035.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image036.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image036.jpg
deleted file mode 100644
index c7c6c72819b655884d97637b696d1814e5a7fdbf..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image036.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image037.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image037.jpg
deleted file mode 100644
index 531e8184e02c43aa177a51c3cc32355cc3df9d42..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image037.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image038.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image038.jpg
deleted file mode 100644
index a8b4d88190c626139bad49cd42a9f7e908b4d0e4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image038.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image039.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image039.jpg
deleted file mode 100644
index 2eab0ceac9c1bd5d8b6ade3d65a6a3ce8b1f8fd4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image039.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image040.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image040.jpg
deleted file mode 100644
index a879fb1f12d8b6c4bd02332abf9b3bd734207763..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image040.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image042.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image042.jpg
deleted file mode 100644
index 2454ade258da6d428c9e23ece2adf7f0291d1a12..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image042.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image044.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image044.jpg
deleted file mode 100644
index cbff652015c36a5856afc909518f3c0fd22f23ff..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image044.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image046.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image046.jpg
deleted file mode 100644
index 58a493ea4f69b264fc69cfd3e34f32d5a171c303..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image046.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image050.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image050.jpg
deleted file mode 100644
index 35cc26d483358550c9a53ce855c2ae483eddb7e1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image050.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image060.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image060.jpg
deleted file mode 100644
index 7723975694f7f56d88187a69626343af11efbd23..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image060.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image062.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image062.jpg
deleted file mode 100644
index 838bc48ab8d77f7dbba9ca02925838a49b19ce53..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image062.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image064.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image064.jpg
deleted file mode 100644
index fb39e70b78b45af301973ea802a219c482a21590..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image064.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image066.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image066.jpg
deleted file mode 100644
index 0a596cfb3ef7a79674ff33a7be5c97859cc2b9c4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image066.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image068.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image068.jpg
deleted file mode 100644
index 0023ba9236a768001e462d6a10434719f3f733fd..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image068.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image072.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image072.jpg
deleted file mode 100644
index d1e5fad4192d4cb5821cafe4031dbc4ff599eccd..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image072.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image074.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image074.jpg
deleted file mode 100644
index 97fa2b21b4029ff75156893f8abdff2e77aa38bd..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image074.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image076.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image076.jpg
deleted file mode 100644
index e754c7dcd30ee6fa82ab20bde4d66f69aabe2fa7..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image076.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image088.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image088.jpg
deleted file mode 100644
index 8b85f0727893c4cf6cd258550466ca4f4a340e6e..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image088.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image090.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image090.jpg
deleted file mode 100644
index a3f405388fd75b23b652bc86475be5fd5e1f48ac..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image090.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image092.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image092.jpg
deleted file mode 100644
index 68ca9c66fc3f03760873075af20c8a9e28aaab48..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image092.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image093.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image093.jpg
deleted file mode 100644
index 594b2ceadb2c27290e0339e14b298fa2feffe6a9..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image093.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image094.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image094.jpg
deleted file mode 100644
index e931a95180d27d55590e73948ebe80a1f81bede1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image094.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image096.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image096.jpg
deleted file mode 100644
index 3ed0c88500bb4caffccea4d08aaa3a6310e177bd..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image096.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image097.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image097.jpg
deleted file mode 100644
index 0cb303bea0e9e88bf56fe22806c91605f1822606..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image097.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image100.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image100.jpg
deleted file mode 100644
index 7dd66fa814e1dcc67b30e41e05ff2d36a2cae2a8..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image100.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image101.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image101.jpg
deleted file mode 100644
index 656ec720e30e09c72ea2c61c9caad41282a7f923..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image101.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image102.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image102.jpg
deleted file mode 100644
index 973c9940bc8e72ed2355026c98f9885f30096ba4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image102.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image103.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image103.jpg
deleted file mode 100644
index e945dc73adf0b3755bbed6f54b8f6255d7d8f3fc..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image103.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image104.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image104.jpg
deleted file mode 100644
index 6a20059ff34b48f657c7d7d998597c7f1332d220..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image104.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image105.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image105.jpg
deleted file mode 100644
index 62606f6b5b4cc9eabea35e79f2da9dae45b91c29..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image105.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image106.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image106.jpg
deleted file mode 100644
index 60a0c13f748cf1265ee43a766a86041fc1abe7c6..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image106.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image107.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image107.jpg
deleted file mode 100644
index 19e59bb533d230c64fd12b2a0f4abb5156428695..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image107.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image108.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image108.jpg
deleted file mode 100644
index 14bfeaaf6bc7416d4b53aeb85dda5883dfb22aa9..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image108.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image109.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image109.jpg
deleted file mode 100644
index 3a155082deeab5246182f3fceb28cc1b65e709e1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image109.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image110.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image110.jpg
deleted file mode 100644
index dee5a5107e59244dc24c1a41a928b3aaf705b052..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image110.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image111.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image111.jpg
deleted file mode 100644
index 6324111bd6e2bf8f9f62429058b4e3f5fd5b36b9..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image111.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image112.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image112.jpg
deleted file mode 100644
index c71a61026a48cd7f1732b4b9d41c00d0c03bc521..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image112.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image113.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image113.jpg
deleted file mode 100644
index d44ede801205f4bf4981cf751ca49f5df3324c2e..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image113.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image114.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image114.jpg
deleted file mode 100644
index 43dbe99fc91ee7907928e9dabe5e50b1fd202fef..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image114.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image115.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image115.jpg
deleted file mode 100644
index e676224e6be68f2b770d8effac56ab5b3f433e99..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image115.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image116.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image116.jpg
deleted file mode 100644
index 3c0569c618cb45d19783b5034209050ad1dee716..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image116.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image117.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image117.jpg
deleted file mode 100644
index 1f3d079ed853fef95ce56f258b489d31117da8cf..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image117.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image118.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image118.jpg
deleted file mode 100644
index 2729fbe6133df67d4fab7438244182ba836f5908..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image118.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image119.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image119.jpg
deleted file mode 100644
index 44fd8b841548900ab6ddc598b2ad0124d8341864..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image119.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image120.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image120.jpg
deleted file mode 100644
index 19d88f70e7675b110b582164e43ad4f60419c7be..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image120.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image121.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image121.jpg
deleted file mode 100644
index 4f997ac892b2ce7d1daecc358afabcccec0d04be..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image121.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image122.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image122.jpg
deleted file mode 100644
index 9a16de514b7ec68c579cc02ec680df3db1292746..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image122.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image123.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image123.jpg
deleted file mode 100644
index f5f4a82d076f0b22b2e85d139c2ac5b6572bb571..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image123.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image124.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image124.jpg
deleted file mode 100644
index e835a4b22a7032069e7c2edb6eda1b012a15671f..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image124.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image125.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image125.jpg
deleted file mode 100644
index 3b779d9ba44f054c3673a761417035a86f97db06..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image125.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image126.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image126.jpg
deleted file mode 100644
index 93f72bd16af2289bdfdd24120acadb3506f37ab5..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image126.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image127.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image127.jpg
deleted file mode 100644
index 09787b77310e4dabb74f48d154168c67204f2243..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image127.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image128.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image128.jpg
deleted file mode 100644
index 074ab2fc864e572f1a4c59511316466ec72927e0..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image128.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image129.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image129.jpg
deleted file mode 100644
index c144eb5cfdf77caa18567a909b27c55880960a08..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image129.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image130.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image130.jpg
deleted file mode 100644
index a88fdddd286bd9b2eb6773734227471f5f2dd655..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image130.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image131.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image131.jpg
deleted file mode 100644
index c51c4fdf2370cb292a811dce8b467ded6182d86c..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image131.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image132.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image132.jpg
deleted file mode 100644
index 085df85bf7f3f65fb1e21e135cac3ccb18cbe3e0..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image132.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image133.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image133.jpg
deleted file mode 100644
index dbd04ac802204eb9327552431a1f5b3fe213f10b..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image133.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image134.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image134.jpg
deleted file mode 100644
index 2de14584f45993655ef626b7a76e0b22cab2eeb8..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image134.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image135.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image135.jpg
deleted file mode 100644
index aaa7212f107ca3d81470328b37f25e4ba36cf199..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image135.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image136.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image136.jpg
deleted file mode 100644
index 3aa85416624254075ae967ff13178df1922ed1b4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image136.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image137.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image137.jpg
deleted file mode 100644
index 5db1e281048f80b4e82cb7d375c4fa08729f4ea7..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image137.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image138.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image138.jpg
deleted file mode 100644
index 4884213affb1aee8b54131690cf5042293803eb2..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image138.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image139.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image139.jpg
deleted file mode 100644
index 0173dc81f0ba01639ef60397d45185323fe84440..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image139.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image140.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image140.jpg
deleted file mode 100644
index 900204ac42a37068c1292b4779a08336e84a88ff..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image140.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image141.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image141.jpg
deleted file mode 100644
index 49d46b7bbb3ff297f7975c5f96ccfa42b0c3fdc1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image141.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image142.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image142.jpg
deleted file mode 100644
index 12a49c84869560400bc3859aef3b80cea3bd7722..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image142.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image143.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image143.jpg
deleted file mode 100644
index 7a2e5f268171c29cfc84c6f9748f5ebe3a7ef399..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image143.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/index.rst b/docs/devtoolkit/docs/source_zh_cn/index.rst
deleted file mode 100644
index 55cf86cc6f52ce0661e93ad1c642bc37e009e81a..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/index.rst
+++ /dev/null
@@ -1,61 +0,0 @@
-MindSpore Dev Toolkit Documentation
-=====================================
-
-MindSpore Dev Toolkit is a `PyCharm `_ (cross-platform Python IDE) plugin that supports MindSpore development, providing features such as `project creation `_ , `smart completion `_ , `API cross-search `_ and `document search `_ .
-
-By means of deep learning, intelligent search, and intelligent recommendation technologies, MindSpore Dev Toolkit aims to deliver the best intelligent computing experience, comprehensively improve the usability of the MindSpore framework, and help promote the MindSpore ecosystem.
-
-Code repository:
-
-System Requirements
-------------------------------
-
-- Operating systems supported by the plugin:
-
- - Windows 10
-
- - Linux
-
-  - macOS (x86 architecture only; code completion is not yet available)
-
-- PyCharm versions supported by the plugin:
-
- - 2020.3
-
-  - 2021.x
-
- - 2022.x
-
-.. toctree::
- :glob:
- :maxdepth: 1
-   :caption: PyCharm Plugin User Guide
- :hidden:
-
- PyCharm_plugin_install
- compiling
- smart_completion
- PyCharm_change_version
- api_search
- api_scanning
- knowledge_search
- mindspore_project_wizard
-
-.. toctree::
- :glob:
- :maxdepth: 1
-   :caption: VSCode Plugin User Guide
- :hidden:
-
- VSCode_plugin_install
- VSCode_smart_completion
- VSCode_change_version
- VSCode_api_search
- VSCode_api_scan
-
-.. toctree::
- :glob:
- :maxdepth: 1
- :caption: RELEASE NOTES
-
- RELEASE
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/knowledge_search.md b/docs/devtoolkit/docs/source_zh_cn/knowledge_search.md
deleted file mode 100644
index 6ba1068fafc6861143763f2a30a347bad4ed3ced..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/knowledge_search.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Smart Knowledge Search
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/knowledge_search.md)
-
-## Features
-
-* Targeted recommendation: provides more accurate search results based on the user's usage habits.
-* An immersive document retrieval experience that avoids switching back and forth between the IDE and the browser, adapted to the sidebar with a narrow-screen interface.
-
-## Usage Steps
-
-1. Open the sidebar to display the search home page.
-
- 
-
-2. Enter "api mapping", click search, and view the results.
-
- 
-
-3. Click the home button to return to the home page.
-
- 
\ No newline at end of file
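The "targeted recommendation" feature of the deleted knowledge-search page ranks results using the user's habits. As a purely hypothetical sketch (this is not the plugin's actual algorithm; `rerank` and the document ids are made up for illustration), results can be re-ranked by combining their base relevance order with a boost from past clicks:

```python
from collections import Counter

def rerank(results, click_history, boost=1.0):
    """Re-rank `results` (doc ids, best first) using past clicks.

    Each result gets a base score from its position, plus a boost
    proportional to how often the user clicked it before.
    """
    clicks = Counter(click_history)
    scored = {
        doc: (len(results) - i) + boost * clicks[doc]
        for i, doc in enumerate(results)
    }
    return sorted(results, key=lambda d: scored[d], reverse=True)

history = ["api_mapping", "api_mapping", "install_guide"]
print(rerank(["faq", "api_mapping", "install_guide"], history))
# → ['api_mapping', 'faq', 'install_guide']
```

Because Python's sort is stable, results with equal scores keep their base relevance order, so the boost only reorders items the user has demonstrably preferred.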
diff --git a/docs/devtoolkit/docs/source_zh_cn/mindspore_project_wizard.md b/docs/devtoolkit/docs/source_zh_cn/mindspore_project_wizard.md
deleted file mode 100644
index ca5972171d1a96375155f6aef3b094cbf73ead13..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/mindspore_project_wizard.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Creating a Project
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/mindspore_project_wizard.md)
-
-## Technical Background
-
-This feature is implemented based on [conda](https://conda.io). Conda is a package and environment management system, and one of the recommended ways to install MindSpore.
-
-## Features
-
-* Create a conda environment or select an existing one, and install the MindSpore binary package into it.
-* Deploy a best-practice template. This not only verifies that the environment was installed successfully, but also gives new users an introduction to MindSpore.
-* With a good network connection, the environment can be installed within 10 minutes so you can start experiencing MindSpore, saving new users up to 80% of environment-setup time.
-
-## Usage Steps
-
-1. Choose **File** > **New Project**.
-
- 
-
-2. Select **MindSpore**.
-
- 
-
-3. Download and install Miniconda. ***Skip this step if conda is already installed.***
-
- 3.1 Click the Install Miniconda Automatically button.
-
- 
-
- 3.2 Choose the download and installation folder. **Keeping the default path for installing conda is recommended.**
-
- 
-
- 3.3 Click the **Install** button and wait for the download and installation to finish.
-
- 
-
- 
-
- 3.4 The Miniconda download and installation is complete.
-
- 
-
- 3.5 Restart PyCharm when prompted, or restart it yourself later. ***Note: PyCharm must be restarted before the following steps can continue.***
-
- 
-
-4. Confirm that the Conda executable path has been filled in correctly. If it has not been filled in automatically, click the folder button and select the path of the locally installed conda.
-
- 
-
-5. Create a new conda environment or select an existing one.
-
- * Create a new conda environment. **Keeping the default path for the conda environment is recommended. Due to a PyCharm limitation, a location other than the default directory cannot currently be selected on Linux.**
-
- 
-
- * Select an existing conda environment in PyCharm.
-
- 
-
-6. Select the hardware environment and a MindSpore project best-practice template.
-
- 6.1 Select the hardware environment.
-
- 
-
- 6.2 Select a best-practice template. Best-practice templates are sample projects provided by MindSpore to help new users get familiar with it, and they can be run directly.
-
- 
-
-7. Click the **Create** button to create the project, and wait for MindSpore to be downloaded and installed.
-
- 7.1 Click the **Create** button to create a new MindSpore project.
-
- 
-
- 7.2 Creating the conda environment.
-
- 
-
- 7.3 Configuring MindSpore through conda.
-
- 
-
-8. The MindSpore project has been created.
-
- 
-
-9. Verify that the MindSpore project was created successfully.
-
- * Click Terminal at the bottom, enter `python -c "import mindspore;mindspore.run_check()"`, and check the output. If the version number and other information are displayed, as in the figure below, the MindSpore environment is usable.
-
- 
-
- * If a best-practice template was selected, you can test the MindSpore environment by running it.
-
- 
-
- 
-
- 
\ No newline at end of file
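The verification step above runs `python -c "import mindspore;mindspore.run_check()"` in the IDE terminal. As a rough illustration of the same idea, the minimal sketch below checks whether a package imports and reports its version; it is not MindSpore's actual `run_check()` implementation, and the stdlib `json` target in the demo is only there so the sketch runs anywhere:

```python
import importlib

def run_import_check(package: str) -> str:
    """Try to import *package* and return a short status line.

    Mirrors the spirit of the guide's verification step: a successful
    import plus a readable version string is taken as "environment usable".
    """
    try:
        module = importlib.import_module(package)
    except ImportError as exc:
        return f"{package}: NOT available ({exc})"
    # Most distributions expose __version__; fall back to "unknown".
    version = getattr(module, "__version__", "unknown")
    return f"{package}: available, version {version}"

if __name__ == "__main__":
    # In the guide the target would be "mindspore".
    print(run_import_check("json"))
```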
diff --git a/docs/devtoolkit/docs/source_zh_cn/smart_completion.md b/docs/devtoolkit/docs/source_zh_cn/smart_completion.md
deleted file mode 100644
index 5e4cf4edd5272715f7af955a697341e51f465eb5..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/smart_completion.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Code Completion
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/smart_completion.md)
-
-## Features
-
-* Provides AI-powered code completion based on MindSpore projects.
-* Develop with MindSpore easily, even without a MindSpore environment installed.
-
-## Usage Steps
-
-1. Open a Python file and start writing code.
-
- 
-
-2. Completion takes effect automatically as you code. Entries with the MindSpore icon are code completions provided by MindSpore Dev Toolkit's smart completion.
-
- 
-
- 
-
-## Notes
-
-1. PyCharm 2021 and later reorders completion items using machine learning, which may push the plugin's completion entries lower in the list. This behavior can be disabled in the settings so that the ordering provided by MindSpore Dev Toolkit is used.
-
- 
-
-2. Comparison before and after disabling this option.
-
- * After disabling.
-
- 
-
- * Before disabling.
-
- 
\ No newline at end of file
diff --git a/docs/federated/docs/Makefile b/docs/federated/docs/Makefile
deleted file mode 100644
index 1eff8952707bdfa503c8d60c1e9a903053170ba2..0000000000000000000000000000000000000000
--- a/docs/federated/docs/Makefile
+++ /dev/null
@@ -1,20 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS ?=
-SPHINXBUILD ?= sphinx-build
-SOURCEDIR = source_zh_cn
-BUILDDIR = build_zh_cn
-
-# Put it first so that "make" without argument is like "make help".
-help:
- @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
- @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/federated/docs/_ext/overwriteautosummary_generate.txt b/docs/federated/docs/_ext/overwriteautosummary_generate.txt
deleted file mode 100644
index 4b0a1b1dd2b410ecab971b13da9993c90d65ef0d..0000000000000000000000000000000000000000
--- a/docs/federated/docs/_ext/overwriteautosummary_generate.txt
+++ /dev/null
@@ -1,707 +0,0 @@
-"""
- sphinx.ext.autosummary.generate
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Usable as a library or script to generate automatic RST source files for
- items referred to in autosummary:: directives.
-
- Each generated RST file contains a single auto*:: directive which
- extracts the docstring of the referred item.
-
- Example Makefile rule::
-
- generate:
- sphinx-autogen -o source/generated source/*.rst
-
- :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import argparse
-import importlib
-import inspect
-import locale
-import os
-import pkgutil
-import pydoc
-import re
-import sys
-import warnings
-from gettext import NullTranslations
-from os import path
-from typing import Any, Dict, List, NamedTuple, Sequence, Set, Tuple, Type, Union
-
-from jinja2 import TemplateNotFound
-from jinja2.sandbox import SandboxedEnvironment
-
-import sphinx.locale
-from sphinx import __display_version__, package_dir
-from sphinx.application import Sphinx
-from sphinx.builders import Builder
-from sphinx.config import Config
-from sphinx.deprecation import RemovedInSphinx50Warning
-from sphinx.ext.autodoc import Documenter
-from sphinx.ext.autodoc.importer import import_module
-from sphinx.ext.autosummary import (ImportExceptionGroup, get_documenter, import_by_name,
- import_ivar_by_name)
-from sphinx.locale import __
-from sphinx.pycode import ModuleAnalyzer, PycodeError
-from sphinx.registry import SphinxComponentRegistry
-from sphinx.util import logging, rst, split_full_qualified_name, get_full_modname
-from sphinx.util.inspect import getall, safe_getattr
-from sphinx.util.osutil import ensuredir
-from sphinx.util.template import SphinxTemplateLoader
-
-logger = logging.getLogger(__name__)
-
-
-class DummyApplication:
- """Dummy Application class for sphinx-autogen command."""
-
- def __init__(self, translator: NullTranslations) -> None:
- self.config = Config()
- self.registry = SphinxComponentRegistry()
- self.messagelog: List[str] = []
- self.srcdir = "/"
- self.translator = translator
- self.verbosity = 0
- self._warncount = 0
- self.warningiserror = False
-
- self.config.add('autosummary_context', {}, True, None)
- self.config.add('autosummary_filename_map', {}, True, None)
- self.config.add('autosummary_ignore_module_all', True, 'env', bool)
- self.config.add('docs_branch', '', True, None)
- self.config.add('branch', '', True, None)
- self.config.add('cst_module_name', '', True, None)
- self.config.add('copy_repo', '', True, None)
- self.config.add('giturl', '', True, None)
- self.config.add('repo_whl', '', True, None)
- self.config.init_values()
-
- def emit_firstresult(self, *args: Any) -> None:
- pass
-
-
-class AutosummaryEntry(NamedTuple):
- name: str
- path: str
- template: str
- recursive: bool
-
-
-def setup_documenters(app: Any) -> None:
- from sphinx.ext.autodoc import (AttributeDocumenter, ClassDocumenter, DataDocumenter,
- DecoratorDocumenter, ExceptionDocumenter,
- FunctionDocumenter, MethodDocumenter, ModuleDocumenter,
- NewTypeAttributeDocumenter, NewTypeDataDocumenter,
- PropertyDocumenter)
- documenters: List[Type[Documenter]] = [
- ModuleDocumenter, ClassDocumenter, ExceptionDocumenter, DataDocumenter,
- FunctionDocumenter, MethodDocumenter, NewTypeAttributeDocumenter,
- NewTypeDataDocumenter, AttributeDocumenter, DecoratorDocumenter, PropertyDocumenter,
- ]
- for documenter in documenters:
- app.registry.add_documenter(documenter.objtype, documenter)
-
-
-def _simple_info(msg: str) -> None:
- warnings.warn('_simple_info() is deprecated.',
- RemovedInSphinx50Warning, stacklevel=2)
- print(msg)
-
-
-def _simple_warn(msg: str) -> None:
- warnings.warn('_simple_warn() is deprecated.',
- RemovedInSphinx50Warning, stacklevel=2)
- print('WARNING: ' + msg, file=sys.stderr)
-
-
-def _underline(title: str, line: str = '=') -> str:
- if '\n' in title:
- raise ValueError('Can only underline single lines')
- return title + '\n' + line * len(title)
-
-
-class AutosummaryRenderer:
- """A helper class for rendering."""
-
- def __init__(self, app: Union[Builder, Sphinx], template_dir: str = None) -> None:
- if isinstance(app, Builder):
- warnings.warn('The first argument for AutosummaryRenderer has been '
- 'changed to Sphinx object',
- RemovedInSphinx50Warning, stacklevel=2)
- if template_dir:
- warnings.warn('template_dir argument for AutosummaryRenderer is deprecated.',
- RemovedInSphinx50Warning, stacklevel=2)
-
- system_templates_path = [os.path.join(package_dir, 'ext', 'autosummary', 'templates')]
- loader = SphinxTemplateLoader(app.srcdir, app.config.templates_path,
- system_templates_path)
-
- self.env = SandboxedEnvironment(loader=loader)
- self.env.filters['escape'] = rst.escape
- self.env.filters['e'] = rst.escape
- self.env.filters['underline'] = _underline
-
- if isinstance(app, (Sphinx, DummyApplication)):
- if app.translator:
- self.env.add_extension("jinja2.ext.i18n")
- self.env.install_gettext_translations(app.translator)
- elif isinstance(app, Builder):
- if app.app.translator:
- self.env.add_extension("jinja2.ext.i18n")
- self.env.install_gettext_translations(app.app.translator)
-
- def exists(self, template_name: str) -> bool:
- """Check if template file exists."""
- warnings.warn('AutosummaryRenderer.exists() is deprecated.',
- RemovedInSphinx50Warning, stacklevel=2)
- try:
- self.env.get_template(template_name)
- return True
- except TemplateNotFound:
- return False
-
- def render(self, template_name: str, context: Dict) -> str:
- """Render a template file."""
- try:
- template = self.env.get_template(template_name)
- except TemplateNotFound:
- try:
- # objtype is given as template_name
- template = self.env.get_template('autosummary/%s.rst' % template_name)
- except TemplateNotFound:
- # fallback to base.rst
- template = self.env.get_template('autosummary/base.rst')
-
- return template.render(context)
-
-
-# -- Generating output ---------------------------------------------------------
-
-
-class ModuleScanner:
- def __init__(self, app: Any, obj: Any) -> None:
- self.app = app
- self.object = obj
-
- def get_object_type(self, name: str, value: Any) -> str:
- return get_documenter(self.app, value, self.object).objtype
-
- def is_skipped(self, name: str, value: Any, objtype: str) -> bool:
- try:
- return self.app.emit_firstresult('autodoc-skip-member', objtype,
- name, value, False, {})
- except Exception as exc:
- logger.warning(__('autosummary: failed to determine %r to be documented, '
- 'the following exception was raised:\n%s'),
- name, exc, type='autosummary')
- return False
-
- def scan(self, imported_members: bool) -> List[str]:
- members = []
- for name in members_of(self.object, self.app.config):
- try:
- value = safe_getattr(self.object, name)
- except AttributeError:
- value = None
-
- objtype = self.get_object_type(name, value)
- if self.is_skipped(name, value, objtype):
- continue
-
- try:
- if inspect.ismodule(value):
- imported = True
- elif safe_getattr(value, '__module__') != self.object.__name__:
- imported = True
- else:
- imported = False
- except AttributeError:
- imported = False
-
- respect_module_all = not self.app.config.autosummary_ignore_module_all
- if imported_members:
- # list all members up
- members.append(name)
- elif imported is False:
- # list not-imported members
- members.append(name)
- elif '__all__' in dir(self.object) and respect_module_all:
- # list members that have __all__ set
- members.append(name)
-
- return members
-
-
-def members_of(obj: Any, conf: Config) -> Sequence[str]:
- """Get the members of ``obj``, possibly ignoring the ``__all__`` module attribute
-
- Follows the ``conf.autosummary_ignore_module_all`` setting."""
-
- if conf.autosummary_ignore_module_all:
- return dir(obj)
- else:
- return getall(obj) or dir(obj)
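`members_of` above switches between `dir()` and the module's `__all__` list depending on `autosummary_ignore_module_all`. A small self-contained illustration of that behavior, using a synthetic module and a plain boolean instead of a real Sphinx `Config` (the `members_of_sketch` helper is invented for this demo):

```python
import types

def members_of_sketch(obj, ignore_module_all: bool):
    """Simplified stand-in for members_of(): honour __all__ unless told not to."""
    if ignore_module_all:
        return dir(obj)
    # getall() in Sphinx returns __all__ when it is present and well-formed.
    return list(getattr(obj, "__all__", None) or dir(obj))

# Build a throwaway module with one public and one "hidden" name.
mod = types.ModuleType("demo")
mod.public = 1
mod.hidden = 2
mod.__all__ = ["public"]

# Respecting __all__ yields only the exported name; ignoring it falls back to dir().
print(members_of_sketch(mod, ignore_module_all=False))
print("hidden" in members_of_sketch(mod, ignore_module_all=True))
```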
-
-
-def generate_autosummary_content(name: str, obj: Any, parent: Any,
- template: AutosummaryRenderer, template_name: str,
- imported_members: bool, app: Any,
- recursive: bool, context: Dict,
- modname: str = None, qualname: str = None) -> str:
- doc = get_documenter(app, obj, parent)
-
- def skip_member(obj: Any, name: str, objtype: str) -> bool:
- try:
- return app.emit_firstresult('autodoc-skip-member', objtype, name,
- obj, False, {})
- except Exception as exc:
- logger.warning(__('autosummary: failed to determine %r to be documented, '
- 'the following exception was raised:\n%s'),
- name, exc, type='autosummary')
- return False
-
- def get_class_members(obj: Any) -> Dict[str, Any]:
- members = sphinx.ext.autodoc.get_class_members(obj, [qualname], safe_getattr)
- return {name: member.object for name, member in members.items()}
-
- def get_module_members(obj: Any) -> Dict[str, Any]:
- members = {}
- for name in members_of(obj, app.config):
- try:
- members[name] = safe_getattr(obj, name)
- except AttributeError:
- continue
- return members
-
- def get_all_members(obj: Any) -> Dict[str, Any]:
- if doc.objtype == "module":
- return get_module_members(obj)
- elif doc.objtype == "class":
- return get_class_members(obj)
- return {}
-
- def get_members(obj: Any, types: Set[str], include_public: List[str] = [],
- imported: bool = True) -> Tuple[List[str], List[str]]:
- items: List[str] = []
- public: List[str] = []
-
- all_members = get_all_members(obj)
- for name, value in all_members.items():
- documenter = get_documenter(app, value, obj)
- if documenter.objtype in types:
- # skip imported members if expected
- if imported or getattr(value, '__module__', None) == obj.__name__:
- skipped = skip_member(value, name, documenter.objtype)
- if skipped is True:
- pass
- elif skipped is False:
- # show the member forcedly
- items.append(name)
- public.append(name)
- else:
- items.append(name)
- if name in include_public or not name.startswith('_'):
- # considers member as public
- public.append(name)
- return public, items
-
- def get_module_attrs(members: Any) -> Tuple[List[str], List[str]]:
- """Find module attributes with docstrings."""
- attrs, public = [], []
- try:
- analyzer = ModuleAnalyzer.for_module(name)
- attr_docs = analyzer.find_attr_docs()
- for namespace, attr_name in attr_docs:
- if namespace == '' and attr_name in members:
- attrs.append(attr_name)
- if not attr_name.startswith('_'):
- public.append(attr_name)
- except PycodeError:
- pass # give up if ModuleAnalyzer fails to parse code
- return public, attrs
-
- def get_modules(obj: Any) -> Tuple[List[str], List[str]]:
- items: List[str] = []
- for _, modname, _ispkg in pkgutil.iter_modules(obj.__path__):
- fullname = name + '.' + modname
- try:
- module = import_module(fullname)
- if module and hasattr(module, '__sphinx_mock__'):
- continue
- except ImportError:
- pass
-
- items.append(fullname)
- public = [x for x in items if not x.split('.')[-1].startswith('_')]
- return public, items
-
- ns: Dict[str, Any] = {}
- ns.update(context)
-
- if doc.objtype == 'module':
- scanner = ModuleScanner(app, obj)
- ns['members'] = scanner.scan(imported_members)
- ns['functions'], ns['all_functions'] = \
- get_members(obj, {'function'}, imported=imported_members)
- ns['classes'], ns['all_classes'] = \
- get_members(obj, {'class'}, imported=imported_members)
- ns['exceptions'], ns['all_exceptions'] = \
- get_members(obj, {'exception'}, imported=imported_members)
- ns['attributes'], ns['all_attributes'] = \
- get_module_attrs(ns['members'])
- ispackage = hasattr(obj, '__path__')
- if ispackage and recursive:
- ns['modules'], ns['all_modules'] = get_modules(obj)
- elif doc.objtype == 'class':
- ns['members'] = dir(obj)
- ns['inherited_members'] = \
- set(dir(obj)) - set(obj.__dict__.keys())
- ns['methods'], ns['all_methods'] = \
- get_members(obj, {'method'}, ['__init__'])
- ns['attributes'], ns['all_attributes'] = \
- get_members(obj, {'attribute', 'property'})
-
- if modname is None or qualname is None:
- modname, qualname = split_full_qualified_name(name)
-
- if doc.objtype in ('method', 'attribute', 'property'):
- ns['class'] = qualname.rsplit(".", 1)[0]
-
- if doc.objtype in ('class',):
- shortname = qualname
- else:
- shortname = qualname.rsplit(".", 1)[-1]
-
- ns['fullname'] = name
- ns['module'] = modname
- ns['objname'] = qualname
- ns['name'] = shortname
-
- ns['objtype'] = doc.objtype
- ns['underline'] = len(name) * '='
-
- if template_name:
- return template.render(template_name, ns)
- else:
- return template.render(doc.objtype, ns)
-
-
-def generate_autosummary_docs(sources: List[str], output_dir: str = None,
- suffix: str = '.rst', base_path: str = None,
- builder: Builder = None, template_dir: str = None,
- imported_members: bool = False, app: Any = None,
- overwrite: bool = True, encoding: str = 'utf-8') -> None:
-
- if builder:
- warnings.warn('builder argument for generate_autosummary_docs() is deprecated.',
- RemovedInSphinx50Warning, stacklevel=2)
-
- if template_dir:
- warnings.warn('template_dir argument for generate_autosummary_docs() is deprecated.',
- RemovedInSphinx50Warning, stacklevel=2)
-
- showed_sources = list(sorted(sources))
- if len(showed_sources) > 20:
- showed_sources = showed_sources[:10] + ['...'] + showed_sources[-10:]
- logger.info(__('[autosummary] generating autosummary for: %s') %
- ', '.join(showed_sources))
-
- if output_dir:
- logger.info(__('[autosummary] writing to %s') % output_dir)
-
- if base_path is not None:
- sources = [os.path.join(base_path, filename) for filename in sources]
-
- template = AutosummaryRenderer(app)
-
- # read
- items = find_autosummary_in_files(sources)
-
- # keep track of new files
- new_files = []
-
- if app:
- filename_map = app.config.autosummary_filename_map
- else:
- filename_map = {}
-
- # write
- for entry in sorted(set(items), key=str):
- if entry.path is None:
- # The corresponding autosummary:: directive did not have
- # a :toctree: option
- continue
-
- path = output_dir or os.path.abspath(entry.path)
- ensuredir(path)
-
- try:
- name, obj, parent, modname = import_by_name(entry.name, grouped_exception=True)
- qualname = name.replace(modname + ".", "")
- except ImportExceptionGroup as exc:
- try:
- # try to import as an instance attribute
- name, obj, parent, modname = import_ivar_by_name(entry.name)
- qualname = name.replace(modname + ".", "")
- except ImportError as exc2:
- if exc2.__cause__:
- exceptions: List[BaseException] = exc.exceptions + [exc2.__cause__]
- else:
- exceptions = exc.exceptions + [exc2]
-
- errors = list(set("* %s: %s" % (type(e).__name__, e) for e in exceptions))
- logger.warning(__('[autosummary] failed to import %s.\nPossible hints:\n%s'),
- entry.name, '\n'.join(errors))
- continue
-
- context: Dict[str, Any] = {}
- if app:
- context.update(app.config.autosummary_context)
-
- content = generate_autosummary_content(name, obj, parent, template, entry.template,
- imported_members, app, entry.recursive, context,
- modname, qualname)
- try:
- py_source_rel = get_full_modname(modname, qualname).replace('.', '/') + '.py'
- except Exception:
- logger.warning(name)
- py_source_rel = ''
-
- re_view = f"\n.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/{app.config.docs_branch}/" + \
- f"resource/_static/logo_source_en.svg\n :target: " + app.config.giturl + \
- f"{app.config.copy_repo}/blob/{app.config.branch}/" + app.config.repo_whl + \
- py_source_rel.split(app.config.cst_module_name)[-1] + '\n :alt: View Source On Gitee\n\n'
-
- if re_view not in content and py_source_rel:
- content = re.sub('([=]{5,})\n', r'\1\n' + re_view, content, 1)
- filename = os.path.join(path, filename_map.get(name, name) + suffix)
- if os.path.isfile(filename):
- with open(filename, encoding=encoding) as f:
- old_content = f.read()
-
- if content == old_content:
- continue
- elif overwrite: # content has changed
- with open(filename, 'w', encoding=encoding) as f:
- f.write(content)
- new_files.append(filename)
- else:
- with open(filename, 'w', encoding=encoding) as f:
- f.write(content)
- new_files.append(filename)
-
- # descend recursively to new files
- if new_files:
- generate_autosummary_docs(new_files, output_dir=output_dir,
- suffix=suffix, base_path=base_path,
- builder=builder, template_dir=template_dir,
- imported_members=imported_members, app=app,
- overwrite=overwrite)
-
-
-# -- Finding documented entries in files ---------------------------------------
-
-def find_autosummary_in_files(filenames: List[str]) -> List[AutosummaryEntry]:
- """Find out what items are documented in source/*.rst.
-
- See `find_autosummary_in_lines`.
- """
- documented: List[AutosummaryEntry] = []
- for filename in filenames:
- with open(filename, encoding='utf-8', errors='ignore') as f:
- lines = f.read().splitlines()
- documented.extend(find_autosummary_in_lines(lines, filename=filename))
- return documented
-
-
-def find_autosummary_in_docstring(name: str, module: str = None, filename: str = None
- ) -> List[AutosummaryEntry]:
- """Find out what items are documented in the given object's docstring.
-
- See `find_autosummary_in_lines`.
- """
- if module:
- warnings.warn('module argument for find_autosummary_in_docstring() is deprecated.',
- RemovedInSphinx50Warning, stacklevel=2)
-
- try:
- real_name, obj, parent, modname = import_by_name(name, grouped_exception=True)
- lines = pydoc.getdoc(obj).splitlines()
- return find_autosummary_in_lines(lines, module=name, filename=filename)
- except AttributeError:
- pass
- except ImportExceptionGroup as exc:
- errors = list(set("* %s: %s" % (type(e).__name__, e) for e in exc.exceptions))
- print('Failed to import %s.\nPossible hints:\n%s' % (name, '\n'.join(errors)))
- except SystemExit:
- print("Failed to import '%s'; the module executes module level "
- "statement and it might call sys.exit()." % name)
- return []
-
-
-def find_autosummary_in_lines(lines: List[str], module: str = None, filename: str = None
- ) -> List[AutosummaryEntry]:
- """Find out what items appear in autosummary:: directives in the
- given lines.
-
- Returns a list of (name, toctree, template) where *name* is a name
- of an object and *toctree* the :toctree: path of the corresponding
- autosummary directive (relative to the root of the file name), and
- *template* the value of the :template: option. *toctree* and
- *template* ``None`` if the directive does not have the
- corresponding options set.
- """
- autosummary_re = re.compile(r'^(\s*)\.\.\s+(ms[a-z]*)?autosummary::\s*')
- automodule_re = re.compile(
- r'^\s*\.\.\s+automodule::\s*([A-Za-z0-9_.]+)\s*$')
- module_re = re.compile(
- r'^\s*\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$')
- autosummary_item_re = re.compile(r'^\s+(~?[_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?')
- recursive_arg_re = re.compile(r'^\s+:recursive:\s*$')
- toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$')
- template_arg_re = re.compile(r'^\s+:template:\s*(.*?)\s*$')
-
- documented: List[AutosummaryEntry] = []
-
- recursive = False
- toctree: str = None
- template = None
- current_module = module
- in_autosummary = False
- base_indent = ""
-
- for line in lines:
- if in_autosummary:
- m = recursive_arg_re.match(line)
- if m:
- recursive = True
- continue
-
- m = toctree_arg_re.match(line)
- if m:
- toctree = m.group(1)
- if filename:
- toctree = os.path.join(os.path.dirname(filename),
- toctree)
- continue
-
- m = template_arg_re.match(line)
- if m:
- template = m.group(1).strip()
- continue
-
- if line.strip().startswith(':'):
- continue # skip options
-
- m = autosummary_item_re.match(line)
- if m:
- name = m.group(1).strip()
- if name.startswith('~'):
- name = name[1:]
- if current_module and \
- not name.startswith(current_module + '.'):
- name = "%s.%s" % (current_module, name)
- documented.append(AutosummaryEntry(name, toctree, template, recursive))
- continue
-
- if not line.strip() or line.startswith(base_indent + " "):
- continue
-
- in_autosummary = False
-
- m = autosummary_re.match(line)
- if m:
- in_autosummary = True
- base_indent = m.group(1)
- recursive = False
- toctree = None
- template = None
- continue
-
- m = automodule_re.search(line)
- if m:
- current_module = m.group(1).strip()
- # recurse into the automodule docstring
- documented.extend(find_autosummary_in_docstring(
- current_module, filename=filename))
- continue
-
- m = module_re.match(line)
- if m:
- current_module = m.group(2)
- continue
-
- return documented
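The state machine above keys off a handful of regular expressions. A trimmed-down demonstration of the core pattern — recognising an `autosummary::` block, its `:toctree:` option, and its item names — is sketched below; it simplifies the real function (no templates, no `:recursive:`, no module prefixing) and the `parse_autosummary` helper is invented for illustration:

```python
import re

autosummary_re = re.compile(r'^(\s*)\.\.\s+autosummary::\s*')
toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$')
item_re = re.compile(r'^\s+(~?[_a-zA-Z][a-zA-Z0-9_.]*)\s*$')

def parse_autosummary(lines):
    """Return (toctree, [item names]) for the first autosummary block."""
    toctree, items, inside = None, [], False
    for line in lines:
        if autosummary_re.match(line):
            inside = True
            continue
        if not inside:
            continue
        if (m := toctree_arg_re.match(line)):
            toctree = m.group(1)
        elif (m := item_re.match(line)):
            # A leading '~' shortens the displayed name; strip it like the original.
            items.append(m.group(1).lstrip('~'))
    return toctree, items

src = [
    ".. autosummary::",
    "   :toctree: generated",
    "",
    "   mindspore.Tensor",
    "   ~mindspore.nn.Cell",
]
print(parse_autosummary(src))
```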
-
-
-def get_parser() -> argparse.ArgumentParser:
- parser = argparse.ArgumentParser(
- usage='%(prog)s [OPTIONS] ...',
- epilog=__('For more information, visit .'),
- description=__("""
-Generate ReStructuredText using autosummary directives.
-
-sphinx-autogen is a frontend to sphinx.ext.autosummary.generate. It generates
-the reStructuredText files from the autosummary directives contained in the
-given input files.
-
-The format of the autosummary directive is documented in the
-``sphinx.ext.autosummary`` Python module and can be read using::
-
- pydoc sphinx.ext.autosummary
-"""))
-
- parser.add_argument('--version', action='version', dest='show_version',
- version='%%(prog)s %s' % __display_version__)
-
- parser.add_argument('source_file', nargs='+',
- help=__('source files to generate rST files for'))
-
- parser.add_argument('-o', '--output-dir', action='store',
- dest='output_dir',
- help=__('directory to place all output in'))
- parser.add_argument('-s', '--suffix', action='store', dest='suffix',
- default='rst',
- help=__('default suffix for files (default: '
- '%(default)s)'))
- parser.add_argument('-t', '--templates', action='store', dest='templates',
- default=None,
- help=__('custom template directory (default: '
- '%(default)s)'))
- parser.add_argument('-i', '--imported-members', action='store_true',
- dest='imported_members', default=False,
- help=__('document imported members (default: '
- '%(default)s)'))
- parser.add_argument('-a', '--respect-module-all', action='store_true',
- dest='respect_module_all', default=False,
- help=__('document exactly the members in module __all__ attribute. '
- '(default: %(default)s)'))
-
- return parser
-
-
-def main(argv: List[str] = sys.argv[1:]) -> None:
- sphinx.locale.setlocale(locale.LC_ALL, '')
- sphinx.locale.init_console(os.path.join(package_dir, 'locale'), 'sphinx')
- translator, _ = sphinx.locale.init([], None)
-
- app = DummyApplication(translator)
- logging.setup(app, sys.stdout, sys.stderr) # type: ignore
- setup_documenters(app)
- args = get_parser().parse_args(argv)
-
- if args.templates:
- app.config.templates_path.append(path.abspath(args.templates))
- app.config.autosummary_ignore_module_all = not args.respect_module_all # type: ignore
-
- generate_autosummary_docs(args.source_file, args.output_dir,
- '.' + args.suffix,
- imported_members=args.imported_members,
- app=app)
-
-
-if __name__ == '__main__':
- main()
diff --git a/docs/federated/docs/_ext/overwriteobjectiondirective.txt b/docs/federated/docs/_ext/overwriteobjectiondirective.txt
deleted file mode 100644
index 8a58bf71191f77ca22097ea9de244c9df5c3d4fb..0000000000000000000000000000000000000000
--- a/docs/federated/docs/_ext/overwriteobjectiondirective.txt
+++ /dev/null
@@ -1,368 +0,0 @@
-"""
- sphinx.directives
- ~~~~~~~~~~~~~~~~~
-
- Handlers for additional ReST directives.
-
- :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import re
-import inspect
-import importlib
-from typing import TYPE_CHECKING, Any, Dict, Generic, List, Tuple, TypeVar, cast
-
-from docutils import nodes
-from docutils.nodes import Node
-from docutils.parsers.rst import directives, roles
-
-from sphinx import addnodes
-from sphinx.addnodes import desc_signature
-from sphinx.deprecation import RemovedInSphinx50Warning, deprecated_alias
-from sphinx.util import docutils, logging
-from sphinx.util.docfields import DocFieldTransformer, Field, TypedField
-from sphinx.util.docutils import SphinxDirective
-from sphinx.util.typing import OptionSpec
-
-if TYPE_CHECKING:
- from sphinx.application import Sphinx
-
-
-# RE to strip backslash escapes
-nl_escape_re = re.compile(r'\\\n')
-strip_backslash_re = re.compile(r'\\(.)')
-
-T = TypeVar('T')
-logger = logging.getLogger(__name__)
-
-def optional_int(argument: str) -> int:
- """
- Check for an integer argument or None value; raise ``ValueError`` if not.
- """
- if argument is None:
- return None
- else:
- value = int(argument)
- if value < 0:
- raise ValueError('negative value; must be positive or zero')
- return value
-
-def get_api(fullname):
- try:
- module_name, api_name = ".".join(fullname.split('.')[:-1]), fullname.split('.')[-1]
- module_import = importlib.import_module(module_name)
- except ModuleNotFoundError:
- module_name, api_name = ".".join(fullname.split('.')[:-2]), ".".join(fullname.split('.')[-2:])
- module_import = importlib.import_module(module_name)
- api = eval(f"module_import.{api_name}")
- return api
-
-def get_example(name: str):
- try:
- api_doc = inspect.getdoc(get_api(name))
- example_str = re.findall(r'Examples:\n([\w\W]*?)(\n\n|$)', api_doc)
- if not example_str:
- return []
- example_str = re.sub(r'\n\s+', r'\n', example_str[0][0])
- example_str = example_str.strip()
- example_list = example_str.split('\n')
- return ["", "**样例:**", ""] + example_list + [""]
- except Exception:
- return []
-
-def get_platforms(name: str):
- try:
- api_doc = inspect.getdoc(get_api(name))
- example_str = re.findall(r'Supported Platforms:\n\s+(.*?)\n\n', api_doc)
- if not example_str:
- example_str_leak = re.findall(r'Supported Platforms:\n\s+(.*)', api_doc)
- if example_str_leak:
- example_str = example_str_leak[0].strip()
- example_list = example_str.split('\n')
- example_list = [' ' + example_list[0]]
- return ["", "支持平台:"] + example_list + [""]
- return []
- example_str = example_str[0].strip()
- example_list = example_str.split('\n')
- example_list = [' ' + example_list[0]]
- return ["", "支持平台:"] + example_list + [""]
- except Exception:
- return []
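`get_example` and `get_platforms` above pull sections out of a numpy-style docstring with regular expressions. A compact, dependency-free sketch of the same extraction technique is shown below; the regex mirrors but simplifies the originals, and both the `extract_section` helper and the sample docstring are invented for this demo:

```python
import re

def extract_section(doc: str, header: str):
    """Grab the indented body that follows '<header>:' in a docstring."""
    match = re.search(rf'{header}:\n([\w\W]*?)(?:\n\n|$)', doc)
    if not match:
        return []
    # Collapse the leading indentation of each line, as the originals do.
    body = re.sub(r'\n\s+', '\n', match.group(1)).strip()
    return body.split('\n')

sample_doc = """Adds two tensors.

Supported Platforms:
    ``Ascend`` ``GPU`` ``CPU``

Examples:
    >>> add(1, 2)
    3
"""

print(extract_section(sample_doc, "Supported Platforms"))
print(extract_section(sample_doc, "Examples"))
```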
-
-class ObjectDescription(SphinxDirective, Generic[T]):
- """
- Directive to describe a class, function or similar object. Not used
- directly, but subclassed (in domain-specific directives) to add custom
- behavior.
- """
-
- has_content = True
- required_arguments = 1
- optional_arguments = 0
- final_argument_whitespace = True
- option_spec: OptionSpec = {
- 'noindex': directives.flag,
- } # type: Dict[str, DirectiveOption]
-
- # types of doc fields that this directive handles, see sphinx.util.docfields
- doc_field_types: List[Field] = []
- domain: str = None
- objtype: str = None
- indexnode: addnodes.index = None
-
- # Warning: this might be removed in future version. Don't touch this from extensions.
- _doc_field_type_map = {} # type: Dict[str, Tuple[Field, bool]]
-
- def get_field_type_map(self) -> Dict[str, Tuple[Field, bool]]:
- if self._doc_field_type_map == {}:
- self._doc_field_type_map = {}
- for field in self.doc_field_types:
- for name in field.names:
- self._doc_field_type_map[name] = (field, False)
-
- if field.is_typed:
- typed_field = cast(TypedField, field)
- for name in typed_field.typenames:
- self._doc_field_type_map[name] = (field, True)
-
- return self._doc_field_type_map
-
- def get_signatures(self) -> List[str]:
- """
- Retrieve the signatures to document from the directive arguments. By
- default, signatures are given as arguments, one per line.
-
- Backslash-escaping of newlines is supported.
- """
- lines = nl_escape_re.sub('', self.arguments[0]).split('\n')
- if self.config.strip_signature_backslash:
- # remove backslashes to support (dummy) escapes; helps Vim highlighting
- return [strip_backslash_re.sub(r'\1', line.strip()) for line in lines]
- else:
- return [line.strip() for line in lines]
-
- def handle_signature(self, sig: str, signode: desc_signature) -> Any:
- """
- Parse the signature *sig* into individual nodes and append them to
- *signode*. If ValueError is raised, parsing is aborted and the whole
- *sig* is put into a single desc_name node.
-
- The return value should be a value that identifies the object. It is
- passed to :meth:`add_target_and_index()` unchanged, and otherwise only
- used to skip duplicates.
- """
- raise ValueError
-
- def add_target_and_index(self, name: Any, sig: str, signode: desc_signature) -> None:
- """
- Add cross-reference IDs and entries to self.indexnode, if applicable.
-
- *name* is whatever :meth:`handle_signature()` returned.
- """
- return # do nothing by default
-
- def before_content(self) -> None:
- """
- Called before parsing content. Used to set information about the current
- directive context on the build environment.
- """
- pass
-
- def transform_content(self, contentnode: addnodes.desc_content) -> None:
- """
- Called after creating the content through nested parsing,
- but before the ``object-description-transform`` event is emitted,
- and before the info-fields are transformed.
- Can be used to manipulate the content.
- """
- pass
-
- def after_content(self) -> None:
- """
- Called after parsing content. Used to reset information about the
- current directive context on the build environment.
- """
- pass
-
- def check_class_end(self, content):
- for i in content:
- if not i.startswith('.. include::') and i != "\n" and i != "":
- return False
- return True
-
- def extend_items(self, rst_file, start_num, num):
- ls = []
- for i in range(1, num+1):
- ls.append((rst_file, start_num+i))
- return ls
-
- def run(self) -> List[Node]:
- """
- Main directive entry function, called by docutils upon encountering the
- directive.
-
- This directive is meant to be quite easily subclassable, so it delegates
- to several additional methods. What it does:
-
- * find out if called as a domain-specific directive, set self.domain
- * create a `desc` node to fit all description inside
- * parse standard options, currently `noindex`
- * create an index node if needed as self.indexnode
- * parse all given signatures (as returned by self.get_signatures())
- using self.handle_signature(), which should either return a name
- or raise ValueError
- * add index entries using self.add_target_and_index()
- * parse the content and handle doc fields in it
- """
- if ':' in self.name:
- self.domain, self.objtype = self.name.split(':', 1)
- else:
- self.domain, self.objtype = '', self.name
- self.indexnode = addnodes.index(entries=[])
-
- node = addnodes.desc()
- node.document = self.state.document
- node['domain'] = self.domain
- # 'desctype' is a backwards compatible attribute
- node['objtype'] = node['desctype'] = self.objtype
- node['noindex'] = noindex = ('noindex' in self.options)
- if self.domain:
- node['classes'].append(self.domain)
- node['classes'].append(node['objtype'])
-
- self.names: List[T] = []
- signatures = self.get_signatures()
- for sig in signatures:
- # add a signature node for each signature in the current unit
- # and add a reference target for it
- signode = addnodes.desc_signature(sig, '')
- self.set_source_info(signode)
- node.append(signode)
- try:
- # name can also be a tuple, e.g. (classname, objname);
- # this is strictly domain-specific (i.e. no assumptions may
- # be made in this base class)
- name = self.handle_signature(sig, signode)
- except ValueError:
- # signature parsing failed
- signode.clear()
- signode += addnodes.desc_name(sig, sig)
- continue # we don't want an index entry here
- if name not in self.names:
- self.names.append(name)
- if not noindex:
- # only add target and index entry if this is the first
- # description of the object with this name in this desc block
- self.add_target_and_index(name, sig, signode)
-
- contentnode = addnodes.desc_content()
- node.append(contentnode)
- if self.names:
- # needed for association of version{added,changed} directives
- self.env.temp_data['object'] = self.names[0]
- self.before_content()
- try:
- example = get_example(self.names[0][0])
- platforms = get_platforms(self.names[0][0])
- except Exception as e:
- example = ''
- platforms = ''
-                logger.warning(f'Invalid API name in {self.arguments[0]}.')
- logger.warning(f'{e}')
- extra = platforms + example
- if extra:
- if self.objtype == "method":
- self.content.data.extend(extra)
- else:
- index_num = 0
- for num, i in enumerate(self.content.data):
- if i.startswith('.. py:method::') or self.check_class_end(self.content.data[num:]):
- index_num = num
- break
- if index_num:
- count = len(self.content.data)
- for i in extra:
- self.content.data.insert(index_num-count, i)
- else:
- self.content.data.extend(extra)
- try:
- self.content.items.extend(self.extend_items(self.content.items[0][0], self.content.items[-1][1], len(extra)))
- except Exception as e:
- logger.warning(f'{e}')
- self.state.nested_parse(self.content, self.content_offset, contentnode)
- self.transform_content(contentnode)
- self.env.app.emit('object-description-transform',
- self.domain, self.objtype, contentnode)
- DocFieldTransformer(self).transform_all(contentnode)
- self.env.temp_data['object'] = None
- self.after_content()
- return [self.indexnode, node]
-
-
-class DefaultRole(SphinxDirective):
- """
- Set the default interpreted text role. Overridden from docutils.
- """
-
- optional_arguments = 1
- final_argument_whitespace = False
-
- def run(self) -> List[Node]:
- if not self.arguments:
- docutils.unregister_role('')
- return []
- role_name = self.arguments[0]
- role, messages = roles.role(role_name, self.state_machine.language,
- self.lineno, self.state.reporter)
- if role:
- docutils.register_role('', role)
- self.env.temp_data['default_role'] = role_name
- else:
- literal_block = nodes.literal_block(self.block_text, self.block_text)
- reporter = self.state.reporter
- error = reporter.error('Unknown interpreted text role "%s".' % role_name,
- literal_block, line=self.lineno)
- messages += [error]
-
- return cast(List[nodes.Node], messages)
-
-
-class DefaultDomain(SphinxDirective):
- """
- Directive to (re-)set the default domain for this source file.
- """
-
- has_content = False
- required_arguments = 1
- optional_arguments = 0
- final_argument_whitespace = False
- option_spec = {} # type: Dict
-
- def run(self) -> List[Node]:
- domain_name = self.arguments[0].lower()
- # if domain_name not in env.domains:
- # # try searching by label
- # for domain in env.domains.values():
- # if domain.label.lower() == domain_name:
- # domain_name = domain.name
- # break
- self.env.temp_data['default_domain'] = self.env.domains.get(domain_name)
- return []
-
-def setup(app: "Sphinx") -> Dict[str, Any]:
- app.add_config_value("strip_signature_backslash", False, 'env')
- directives.register_directive('default-role', DefaultRole)
- directives.register_directive('default-domain', DefaultDomain)
- directives.register_directive('describe', ObjectDescription)
- # new, more consistent, name
- directives.register_directive('object', ObjectDescription)
-
- app.add_event('object-description-transform')
-
- return {
- 'version': 'builtin',
- 'parallel_read_safe': True,
- 'parallel_write_safe': True,
- }
-
diff --git a/docs/federated/docs/_ext/overwriteviewcode.txt b/docs/federated/docs/_ext/overwriteviewcode.txt
deleted file mode 100644
index 172780ec56b3ed90e7b0add617257a618cf38ee0..0000000000000000000000000000000000000000
--- a/docs/federated/docs/_ext/overwriteviewcode.txt
+++ /dev/null
@@ -1,378 +0,0 @@
-"""
- sphinx.ext.viewcode
- ~~~~~~~~~~~~~~~~~~~
-
- Add links to module code in Python object descriptions.
-
- :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import posixpath
-import traceback
-import warnings
-from os import path
-from typing import Any, Dict, Generator, Iterable, Optional, Set, Tuple, cast
-
-from docutils import nodes
-from docutils.nodes import Element, Node
-
-import sphinx
-from sphinx import addnodes
-from sphinx.application import Sphinx
-from sphinx.builders import Builder
-from sphinx.builders.html import StandaloneHTMLBuilder
-from sphinx.deprecation import RemovedInSphinx50Warning
-from sphinx.environment import BuildEnvironment
-from sphinx.locale import _, __
-from sphinx.pycode import ModuleAnalyzer
-from sphinx.transforms.post_transforms import SphinxPostTransform
-from sphinx.util import get_full_modname, logging, status_iterator
-from sphinx.util.nodes import make_refnode
-
-
-logger = logging.getLogger(__name__)
-
-
-OUTPUT_DIRNAME = '_modules'
-
-
-class viewcode_anchor(Element):
- """Node for viewcode anchors.
-
- This node will be processed in the resolving phase.
-    For builders that support viewcode, these nodes are all converted to anchors;
-    for builders that do not, they are removed.
- """
-
-
-def _get_full_modname(app: Sphinx, modname: str, attribute: str) -> Optional[str]:
- try:
- return get_full_modname(modname, attribute)
- except AttributeError:
-        # sphinx.ext.viewcode can't follow class instance attributes,
-        # so the resulting AttributeError is logged only in verbose mode.
-        logger.verbose('Didn\'t find %s in %s', attribute, modname)
- return None
- except Exception as e:
-        # sphinx.ext.viewcode follows python domain directives.
-        # Because of that, if no real module exists for an object specified
-        # by py:function or other directives, viewcode emits a lot of warnings.
-        # They should be displayed only in verbose mode.
- logger.verbose(traceback.format_exc().rstrip())
- logger.verbose('viewcode can\'t import %s, failed with error "%s"', modname, e)
- return None
-
-
-def is_supported_builder(builder: Builder) -> bool:
- if builder.format != 'html':
- return False
- elif builder.name == 'singlehtml':
- return False
- elif builder.name.startswith('epub') and not builder.config.viewcode_enable_epub:
- return False
- else:
- return True
-
-
-def doctree_read(app: Sphinx, doctree: Node) -> None:
- env = app.builder.env
- if not hasattr(env, '_viewcode_modules'):
- env._viewcode_modules = {} # type: ignore
-
- def has_tag(modname: str, fullname: str, docname: str, refname: str) -> bool:
- entry = env._viewcode_modules.get(modname, None) # type: ignore
- if entry is False:
- return False
-
- code_tags = app.emit_firstresult('viewcode-find-source', modname)
- if code_tags is None:
- try:
- analyzer = ModuleAnalyzer.for_module(modname)
- analyzer.find_tags()
- except Exception:
- env._viewcode_modules[modname] = False # type: ignore
- return False
-
- code = analyzer.code
- tags = analyzer.tags
- else:
- code, tags = code_tags
-
- if entry is None or entry[0] != code:
- entry = code, tags, {}, refname
- env._viewcode_modules[modname] = entry # type: ignore
- _, tags, used, _ = entry
- if fullname in tags:
- used[fullname] = docname
- return True
-
- return False
-
- for objnode in list(doctree.findall(addnodes.desc)):
- if objnode.get('domain') != 'py':
- continue
- names: Set[str] = set()
- for signode in objnode:
- if not isinstance(signode, addnodes.desc_signature):
- continue
- modname = signode.get('module')
- fullname = signode.get('fullname')
- try:
-                if fullname and modname is None:
- if fullname.split('.')[-1].lower() == fullname.split('.')[-1] and fullname.split('.')[-2].lower() != fullname.split('.')[-2]:
- modname = '.'.join(fullname.split('.')[:-2])
- fullname = '.'.join(fullname.split('.')[-2:])
- else:
- modname = '.'.join(fullname.split('.')[:-1])
- fullname = fullname.split('.')[-1]
- fullname_new = fullname
- except Exception:
-                logger.warning(f'error modname: {modname}')
-                logger.warning(f'error fullname: {fullname}')
- refname = modname
- if env.config.viewcode_follow_imported_members:
- new_modname = app.emit_firstresult(
- 'viewcode-follow-imported', modname, fullname,
- )
- if not new_modname:
- new_modname = _get_full_modname(app, modname, fullname)
- modname = new_modname
- # logger.warning(f'new_modename:{modname}')
- if not modname:
- continue
- # fullname = signode.get('fullname')
- # if fullname and modname==None:
- fullname = fullname_new
- if not has_tag(modname, fullname, env.docname, refname):
- continue
- if fullname in names:
- # only one link per name, please
- continue
- names.add(fullname)
- pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))
- signode += viewcode_anchor(reftarget=pagename, refid=fullname, refdoc=env.docname)
-
-
-def env_merge_info(app: Sphinx, env: BuildEnvironment, docnames: Iterable[str],
- other: BuildEnvironment) -> None:
- if not hasattr(other, '_viewcode_modules'):
- return
- # create a _viewcode_modules dict on the main environment
- if not hasattr(env, '_viewcode_modules'):
- env._viewcode_modules = {} # type: ignore
- # now merge in the information from the subprocess
- for modname, entry in other._viewcode_modules.items(): # type: ignore
- if modname not in env._viewcode_modules: # type: ignore
- env._viewcode_modules[modname] = entry # type: ignore
- else:
- if env._viewcode_modules[modname]: # type: ignore
- used = env._viewcode_modules[modname][2] # type: ignore
- for fullname, docname in entry[2].items():
- if fullname not in used:
- used[fullname] = docname
-
-
-def env_purge_doc(app: Sphinx, env: BuildEnvironment, docname: str) -> None:
- modules = getattr(env, '_viewcode_modules', {})
-
- for modname, entry in list(modules.items()):
- if entry is False:
- continue
-
- code, tags, used, refname = entry
- for fullname in list(used):
- if used[fullname] == docname:
- used.pop(fullname)
-
- if len(used) == 0:
- modules.pop(modname)
-
-
-class ViewcodeAnchorTransform(SphinxPostTransform):
-    """Convert or remove viewcode_anchor nodes depending on the builder."""
- default_priority = 100
-
- def run(self, **kwargs: Any) -> None:
- if is_supported_builder(self.app.builder):
- self.convert_viewcode_anchors()
- else:
- self.remove_viewcode_anchors()
-
- def convert_viewcode_anchors(self) -> None:
- for node in self.document.findall(viewcode_anchor):
- anchor = nodes.inline('', _('[源代码]'), classes=['viewcode-link'])
- refnode = make_refnode(self.app.builder, node['refdoc'], node['reftarget'],
- node['refid'], anchor)
- node.replace_self(refnode)
-
- def remove_viewcode_anchors(self) -> None:
- for node in list(self.document.findall(viewcode_anchor)):
- node.parent.remove(node)
-
-
-def missing_reference(app: Sphinx, env: BuildEnvironment, node: Element, contnode: Node
- ) -> Optional[Node]:
- # resolve our "viewcode" reference nodes -- they need special treatment
- if node['reftype'] == 'viewcode':
-        warnings.warn('viewcode extension no longer uses pending_xref nodes. '
-                      'Please update your extension.', RemovedInSphinx50Warning)
- return make_refnode(app.builder, node['refdoc'], node['reftarget'],
- node['refid'], contnode)
-
- return None
-
-
-def get_module_filename(app: Sphinx, modname: str) -> Optional[str]:
- """Get module filename for *modname*."""
- source_info = app.emit_firstresult('viewcode-find-source', modname)
- if source_info:
- return None
- else:
- try:
- filename, source = ModuleAnalyzer.get_module_source(modname)
- return filename
- except Exception:
- return None
-
-
-def should_generate_module_page(app: Sphinx, modname: str) -> bool:
-    """Check whether generation of the module page is needed."""
- module_filename = get_module_filename(app, modname)
- if module_filename is None:
- # Always (re-)generate module page when module filename is not found.
- return True
-
- builder = cast(StandaloneHTMLBuilder, app.builder)
- basename = modname.replace('.', '/') + builder.out_suffix
- page_filename = path.join(app.outdir, '_modules/', basename)
-
- try:
- if path.getmtime(module_filename) <= path.getmtime(page_filename):
- # generation is not needed if the HTML page is newer than module file.
- return False
- except IOError:
- pass
-
- return True
-
-
-def collect_pages(app: Sphinx) -> Generator[Tuple[str, Dict[str, Any], str], None, None]:
- env = app.builder.env
- if not hasattr(env, '_viewcode_modules'):
- return
- if not is_supported_builder(app.builder):
- return
- highlighter = app.builder.highlighter # type: ignore
- urito = app.builder.get_relative_uri
-
- modnames = set(env._viewcode_modules) # type: ignore
-
- for modname, entry in status_iterator(
- sorted(env._viewcode_modules.items()), # type: ignore
- __('highlighting module code... '), "blue",
- len(env._viewcode_modules), # type: ignore
- app.verbosity, lambda x: x[0]):
- if not entry:
- continue
- if not should_generate_module_page(app, modname):
- continue
-
- code, tags, used, refname = entry
- # construct a page name for the highlighted source
- pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))
- # highlight the source using the builder's highlighter
- if env.config.highlight_language in ('python3', 'default', 'none'):
- lexer = env.config.highlight_language
- else:
- lexer = 'python'
- highlighted = highlighter.highlight_block(code, lexer, linenos=False)
- # split the code into lines
- lines = highlighted.splitlines()
- # split off wrap markup from the first line of the actual code
-        before, after = lines[0].split('<pre>')
-        lines[0:1] = [before + '<pre>', after]
-        # nothing to do for the last line; it always starts with </pre> anyway
-        # now that we have code lines (starting at index 1), insert anchors for
-        # the collected tags (HACK: this only works if the tag boundaries are
-        # properly nested!)
-        maxindex = len(lines) - 1
-        for name, docname in used.items():
-            type, start, end = tags[name]
-            backlink = urito(pagename, docname) + '#' + refname + '.' + name
-            lines[start] = (
-                '<div class="viewcode-block" id="%s"><a class="viewcode-back" '
-                'href="%s">%s</a>' % (name, backlink, _('[docs]')) +
-                lines[start])
-            lines[min(end, maxindex)] += '</div>'
-        # try to find parents (for submodules and top-level modules)
-        parents = []
-        parent = modname
-        while '.' in parent:
-            parent = parent.rsplit('.', 1)[0]
-            if parent in modnames:
-                parents.append({
-                    'link': urito(pagename,
-                                  posixpath.join(OUTPUT_DIRNAME, parent.replace('.', '/'))),
-                    'title': parent})
-        parents.append({'link': urito(pagename, posixpath.join(OUTPUT_DIRNAME, 'index')),
-                        'title': _('Module code')})
-        parents.reverse()
-        # putting it all together
-        context = {
-            'parents': parents,
-            'title': modname,
-            'body': (_('<h1>Source code for %s</h1>') % modname +
-                     '\n'.join(lines)),
-        }
-        yield (pagename, context, 'page.html')
-
-    if not modnames:
-        return
-
-    html = ['\n']
-    # the stack logic is needed for using nested lists for submodules
-    stack = ['']
-    for modname in sorted(modnames):
-        if modname.startswith(stack[-1]):
-            stack.append(modname + '.')
-            html.append('<ul>')
-        else:
-            stack.pop()
-            while not modname.startswith(stack[-1]):
-                stack.pop()
-                html.append('</ul>')
-            stack.append(modname + '.')
-        html.append('<li><a href="%s">%s</a></li>\n' % (
-            urito(posixpath.join(OUTPUT_DIRNAME, 'index'),
-                  posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))),
-            modname))
-    html.append('</ul>' * (len(stack) - 1))
-    context = {
-        'title': _('Overview: module code'),
-        'body': (_('<h1>All modules for which code is available</h1>') +
-                 ''.join(html)),
-    }
-
- yield (posixpath.join(OUTPUT_DIRNAME, 'index'), context, 'page.html')
-
-
-def setup(app: Sphinx) -> Dict[str, Any]:
- app.add_config_value('viewcode_import', None, False)
- app.add_config_value('viewcode_enable_epub', False, False)
- app.add_config_value('viewcode_follow_imported_members', True, False)
- app.connect('doctree-read', doctree_read)
- app.connect('env-merge-info', env_merge_info)
- app.connect('env-purge-doc', env_purge_doc)
- app.connect('html-collect-pages', collect_pages)
- app.connect('missing-reference', missing_reference)
- # app.add_config_value('viewcode_include_modules', [], 'env')
- # app.add_config_value('viewcode_exclude_modules', [], 'env')
- app.add_event('viewcode-find-source')
- app.add_event('viewcode-follow-imported')
- app.add_post_transform(ViewcodeAnchorTransform)
- return {
- 'version': sphinx.__display_version__,
- 'env_version': 1,
- 'parallel_read_safe': True
- }
diff --git a/docs/federated/docs/requirements.txt b/docs/federated/docs/requirements.txt
deleted file mode 100644
index a1b6a69f6dbd9c6f78710f56889e14f0e85b27f4..0000000000000000000000000000000000000000
--- a/docs/federated/docs/requirements.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-sphinx == 4.4.0
-docutils == 0.17.1
-myst-parser == 0.18.1
-sphinx_rtd_theme == 1.0.0
-numpy
-IPython
-jieba
diff --git a/docs/federated/docs/source_en/Data_Join.rst b/docs/federated/docs/source_en/Data_Join.rst
deleted file mode 100644
index 85f2da5d13e7350bf3a9f748fa4cdad8e8efe815..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/Data_Join.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-Data Join
-=====================
-
-.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg
- :target: https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/Data_Join.rst
- :alt: View Source on Gitee
-
-.. toctree::
- :maxdepth: 1
-
- data_join/data_join
- data_join/private_set_intersection
\ No newline at end of file
diff --git a/docs/federated/docs/source_en/communication_compression.md b/docs/federated/docs/source_en/communication_compression.md
deleted file mode 100644
index c797eb4d9cc29d5d75043b2ad80de46d09bb01b4..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/communication_compression.md
+++ /dev/null
@@ -1,139 +0,0 @@
-# Device-Cloud Federated Learning Communication Compression
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/communication_compression.md)
-
-During the horizontal device-side federated learning training process, the communication volume affects the device-side user experience (user traffic, communication latency, number of participating FL-Clients) and is limited by cloud-side performance constraints (memory, bandwidth, CPU usage). To improve the user experience and reduce performance bottlenecks, the MindSpore federated learning framework provides traffic compression for upload and download in device-cloud federated scenarios.
-
-## Compression Method
-
-### Uploading Compression Method
-
-The upload compression method can be divided into three main parts: weight difference codec, sparse codec and quantization codec. The flowcharts on FL-Client and FL-Server are given below.
-
-
-
-Fig.1 Flowchart of the upload compression method on FL-Client
-
-
-
-Fig.2 Flowchart of the upload compression method on FL-Server
-
-### Weight Difference Codec
-
-The weight difference is the element-wise difference of the weight matrix before and after device-side training. Compared with the original weights, the distribution of the weight difference is closer to a Gaussian distribution, which makes it more suitable for compression. FL-Client performs the encoding operation on the weight difference, and FL-Server performs the decoding operation. Note that, so that FL-Server can restore the weight difference to weights before aggregating them, FL-Client does not multiply the weights by its data amount when uploading; FL-Server multiplies by the data amount when decoding.
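This bookkeeping can be sketched in a few lines of plain Python (the function names and the list-of-floats representation are illustrative, not the framework's actual API):

```python
def encode_weight_diff(trained, downloaded):
    # FL-Client uploads the difference instead of the raw trained weights.
    return [t - d for t, d in zip(trained, downloaded)]

def decode_weight_diff(diff, downloaded, data_size):
    # FL-Server restores the weights, then applies the data-size factor
    # that its weighted aggregation expects.
    return [(d + w) * data_size for d, w in zip(downloaded, diff)]

downloaded = [0.10, -0.20, 0.30]
trained = [0.15, -0.25, 0.32]
restored = decode_weight_diff(encode_weight_diff(trained, downloaded),
                              downloaded, data_size=5)
```

The round trip recovers the trained weights scaled by the data amount, which is exactly the form FL-Server's aggregation consumes.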
-
-
-
-Fig.3 Flow chart of weight difference encoding on FL-Client
-
-
-
-Fig.4 Flow chart of weight difference decoding on FL-Server
-
-### Sparse Codec
-
-The device side and the cloud side follow the same pseudo-random algorithm to generate a sparse mask matrix with the same shape as the original weights to be uploaded. The mask matrix contains only two values, 0 and 1. Each FL-Client uploads to the FL-Server only the weight values at the positions where the mask matrix is non-zero.
-
-Take the sparse method with a sparse rate of sparse_rate=0.08 as an example. The parameters that are required to be uploaded by FL-Client:
-
-| Parameters | Length |
-| -------------------- | ----- |
-| albert.pooler.weight | 97344 |
-| albert.pooler.bias | 312 |
-| classifier.weight | 1560 |
-| classifier.bias | 5 |
-
-Concatenate all parameters as one-dimensional vectors:
-
-| Parameters | Length |
-| ----------- | ---------------------- |
-| merged_data | 97344+312+1560+5=99221 |
-
-Generate a mask vector with the same length as the concatenated parameter vector. It contains 7937 ones, where 7937 = int(sparse_rate * concatenated parameter length), and the remaining values are 0, i.e., mask_vector = (1,1,1,...,0,0,0,...):
-
-| Parameters | Length |
-| ----------- | --------- |
-| mask_vector | 99221 |
-
-Use a pseudo-random algorithm to shuffle the mask_vector, with the current iteration number as the random seed. Take the indexes in the mask_vector whose value is 1, and take out merged_data[indexes], i.e., the compressed vector.
-
-| Parameters | Length |
-| ----------- | --------- |
-| compressed_vector | 7937 |
-
-After sparse compression, the parameter that FL-Client needs to upload is the compressed_vector.
-
-After receiving the compressed_vector, FL-Server first reconstructs the mask_vector with the same pseudo-random algorithm and random seed as FL-Client, and then takes the indexes in the mask_vector whose value is 1. It generates an all-zero vector with the same shape as the model and puts the values in compressed_vector into weight_vector[indexes] in turn. weight_vector is the sparsely decoded vector.
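The encode/decode round trip described above can be sketched as follows. This is a minimal illustration using `random.Random` as a stand-in for the shared pseudo-random algorithm; the function names are hypothetical:

```python
import random

def make_mask(length, sparse_rate, seed):
    # Both sides build the same 0/1 mask from the shared seed
    # (the current iteration number) and the sparse rate.
    ones = int(sparse_rate * length)
    mask = [1] * ones + [0] * (length - ones)
    random.Random(seed).shuffle(mask)
    return mask

def sparse_encode(merged_data, sparse_rate, seed):
    # FL-Client keeps only the values at non-zero mask positions.
    mask = make_mask(len(merged_data), sparse_rate, seed)
    return [v for v, m in zip(merged_data, mask) if m]

def sparse_decode(compressed_vector, length, sparse_rate, seed):
    # FL-Server rebuilds the mask and scatters the received values
    # back into an all-zero vector of the original shape.
    mask = make_mask(length, sparse_rate, seed)
    weight_vector = [0.0] * length
    values = iter(compressed_vector)
    for i, m in enumerate(mask):
        if m:
            weight_vector[i] = next(values)
    return weight_vector
```

With length 99221 and sparse_rate 0.08 this yields a compressed vector of length 7937, matching the table above.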
-
-### Quantization Codec
-
-The quantization compression method approximates floating-point communication data by a finite number of discrete fixed-point values.
-
-Taking the 8-bit quantization as an example:
-
-Quantization bit width: num_bits = 8
-
-The floating-point data before compression is
-
-data = [0.03356021, -0.01842778, -0.009684053, 0.025363436, -0.027571501, 0.0077043395, 0.016391572, -0.03598478, -0.0009508357]
-
-Compute the max and min values:
-
-min_val = -0.03598478
-
-max_val = 0.03356021
-
-Calculate scaling factor:
-
-scale = (max_val - min_val ) / (2 ^ num_bits - 1) = 0.000272725450980392
-
-Convert the pre-compressed data to integers between -128 and 127 with the conversion formula quant_data = round((data - min_val) / scale) - 2 ^ (num_bits - 1), and cast the data type to int8:
-
-quant_data = [127, -64, -32, 97, -97, 32, 64, -128, 0]
-
-After the quantitative encoding, the parameters that FL-Client needs to upload are quant_data and the minimum and maximum values min_val and max_val.
-
-After receiving quant_data, min_val and max_val, FL-Server restores the weights using the dequantization formula (quant_data + 2 ^ (num_bits - 1)) * (max_val - min_val) / (2 ^ num_bits - 1) + min_val.
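The worked example above can be reproduced with a short sketch (plain Python lists instead of int8 arrays; the function names are illustrative, not the framework's API):

```python
def quantize(data, num_bits=8):
    # Map floats onto 2^num_bits evenly spaced levels between min and max.
    min_val, max_val = min(data), max(data)
    scale = (max_val - min_val) / (2 ** num_bits - 1)
    offset = 2 ** (num_bits - 1)
    quant = [round((x - min_val) / scale) - offset for x in data]
    return quant, min_val, max_val

def dequantize(quant, min_val, max_val, num_bits=8):
    # Inverse mapping used by FL-Server to restore the weights.
    scale = (max_val - min_val) / (2 ** num_bits - 1)
    offset = 2 ** (num_bits - 1)
    return [(q + offset) * scale + min_val for q in quant]

data = [0.03356021, -0.01842778, -0.009684053, 0.025363436,
        -0.027571501, 0.0077043395, 0.016391572, -0.03598478, -0.0009508357]
quant, min_val, max_val = quantize(data)
# quant == [127, -64, -32, 97, -97, 32, 64, -128, 0]
```

Dequantizing `quant` reproduces each original value to within half a quantization step, i.e., scale / 2.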
-
-### Downloading Compression Method
-
-The download compression method is mainly a quantization codec operation, and the flow charts on FL-Server and FL-Client are given below.
-
-
-
-Fig.5 Flowchart of the download compression method on FL-Server
-
-
-
-Fig.6 Flowchart of the download compression method on FL-Client
-
-### Quantization Codec
-
-The quantization codec is the same as that in upload compression.
-
-## Code Implementation Preparation
-
-To use the upload and download compression methods, first successfully complete the training and aggregation process of a device-cloud federated scenario, e.g. [Implementing a Sentiment Classification Application (Android)](https://www.mindspore.cn/federated/docs/en/master/sentiment_classification_application.html). That document describes in detail the preparation work, including datasets and network models, and how to simulate multi-client participation in federated learning.
-
-## Enabling the Compression Algorithms
-
-The upload and download compression methods are currently supported only in the device-cloud federated learning scenario. To enable them, set `upload_compress_type='DIFF_SPARSE_QUANT'` and `download_compress_type='QUANT'` in the corresponding yaml of the server startup script when starting the cloud-side service. These two hyperparameters switch the upload and download compression methods on and off, respectively.
-
-The relevant parameter configuration is given in the cloud-side [full startup script](https://gitee.com/mindspore/federated/tree/master/tests/st/cross_device_cloud/). After determining the parameter configuration, users need to configure the corresponding parameters before executing training, for example:
-
-```yaml
-compression:
- upload_compress_type: NO_COMPRESS
- upload_sparse_rate: 0.4
- download_compress_type: NO_COMPRESS
-```
-
-| Hyperparameter Names and Reference Values | Hyperparameter Description |
-| ---------------------- | ------------------------------------------------------------ |
-| upload_compress_type | Upload compression type, string type, including: "NO_COMPRESS", "DIFF_SPARSE_QUANT" |
-| upload_sparse_rate     | Sparse ratio, i.e., the weight retention rate; float type, valid range (0, 1] |
-| download_compress_type | Download compression type, string type, including: "NO_COMPRESS", "QUANT" |
-
-## ALBERT Results
-
-The total number of federated learning iterations is 100, each client trains 1 local epoch per iteration, the number of clients is 20, the batch size is 16, and the learning rate is 1e-5. With both upload and download compression enabled and an upload sparse ratio of 0.4, the final accuracy on the validation set is 72.5%, compared with 72.3% for the ordinary federated scenario without compression.
diff --git a/docs/federated/docs/source_en/conf.py b/docs/federated/docs/source_en/conf.py
deleted file mode 100644
index e63a2a2d9ce5e931baad97629fdd909a7d7c71e4..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/conf.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import shutil
-import sys
-import IPython
-import re
-import sphinx.ext.autosummary.generate as g
-from sphinx.ext import autodoc as sphinx_autodoc
-
-import mindspore
-
-# -- Project information -----------------------------------------------------
-
-project = 'MindSpore'
-copyright = 'MindSpore'
-author = 'MindSpore'
-
-# The full version, including alpha/beta/rc tags
-release = 'master'
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-myst_enable_extensions = ["dollarmath", "amsmath"]
-
-
-myst_heading_anchors = 5
-extensions = [
- 'sphinx.ext.autodoc',
- 'sphinx.ext.doctest',
- 'sphinx.ext.intersphinx',
- 'sphinx.ext.todo',
- 'sphinx.ext.coverage',
- 'sphinx.ext.napoleon',
- 'sphinx.ext.viewcode',
- 'myst_parser',
- 'sphinx.ext.mathjax',
- 'IPython.sphinxext.ipython_console_highlighting'
-]
-
-source_suffix = {
- '.rst': 'restructuredtext',
- '.md': 'markdown',
-}
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js'
-
-mathjax_options = {
- 'async':'async'
-}
-
-smartquotes_action = 'De'
-
-exclude_patterns = []
-
-pygments_style = 'sphinx'
-
-autodoc_inherit_docstrings = False
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-#
-html_theme = 'sphinx_rtd_theme'
-
-import sphinx_rtd_theme
-layout_target = os.path.join(os.path.dirname(sphinx_rtd_theme.__file__), 'layout.html')
-layout_src = '../../../../resource/_static/layout.html'
-if os.path.exists(layout_target):
- os.remove(layout_target)
-shutil.copy(layout_src, layout_target)
-
-html_search_language = 'en'
-
-# Example configuration for intersphinx: refer to the Python standard library.
-intersphinx_mapping = {
- 'python': ('https://docs.python.org/', '../../../../resource/python_objects.inv'),
- 'numpy': ('https://docs.scipy.org/doc/numpy/', '../../../../resource/numpy_objects.inv'),
-}
-
-# overwriteautosummary_generate adds view-source links for the API and makes more autosummary classes available.
-with open('../_ext/overwriteautosummary_generate.txt', 'r', encoding="utf8") as f:
- exec(f.read(), g.__dict__)
-
-# Modify default signatures for autodoc.
-autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__)
-autodoc_source_re = re.compile(r'stringify_signature\(.*?\)')
-get_param_func_str = r"""\
-import re
-import inspect as inspect_
-
-def get_param_func(func):
- try:
- source_code = inspect_.getsource(func)
- if func.__doc__:
- source_code = source_code.replace(func.__doc__, '')
- all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code)
- all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\""))
- return all_params
- except:
- return ''
-
-def get_obj(obj):
- if isinstance(obj, type):
- return obj.__init__
-
- return obj
-"""
-
-with open(autodoc_source_path, "r+", encoding="utf8") as f:
- code_str = f.read()
- code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0)
- exec(get_param_func_str, sphinx_autodoc.__dict__)
- exec(code_str, sphinx_autodoc.__dict__)
-
-import mindspore_federated
-
-# Copy source files of en python api from mindspore repository.
-src_dir_en = os.path.join(os.getenv("MF_PATH"), 'docs/api/api_python_en')
-present_path = os.path.dirname(__file__)
-
-for i in os.listdir(src_dir_en):
- if os.path.isfile(os.path.join(src_dir_en,i)):
- if os.path.exists('./'+i):
- os.remove('./'+i)
- shutil.copy(os.path.join(src_dir_en,i),'./'+i)
- else:
- if os.path.exists('./'+i):
- shutil.rmtree('./'+i)
- shutil.copytree(os.path.join(src_dir_en,i),'./'+i)
-
-# get params for add view source
-import json
-
-if os.path.exists('../../../../tools/generate_html/version.json'):
- with open('../../../../tools/generate_html/version.json', 'r+', encoding='utf-8') as f:
- version_inf = json.load(f)
-elif os.path.exists('../../../../tools/generate_html/daily_dev.json'):
- with open('../../../../tools/generate_html/daily_dev.json', 'r+', encoding='utf-8') as f:
- version_inf = json.load(f)
-elif os.path.exists('../../../../tools/generate_html/daily.json'):
- with open('../../../../tools/generate_html/daily.json', 'r+', encoding='utf-8') as f:
- version_inf = json.load(f)
-
-if os.getenv("MF_PATH").split('/')[-1]:
- copy_repo = os.getenv("MF_PATH").split('/')[-1]
-else:
- copy_repo = os.getenv("MF_PATH").split('/')[-2]
-
-branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == copy_repo][0]
-docs_branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == 'tutorials'][0]
-cst_module_name = 'mindspore_federated'
-repo_whl = 'mindspore_federated'
-giturl = 'https://gitee.com/mindspore/'
-
-sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
-# import anchor_mod
-import nbsphinx_mod
-
-sys.path.append(os.path.abspath('../../../../resource/search'))
-import search_code
-
-
-sys.path.append(os.path.abspath('../../../../resource/custom_directives'))
-from custom_directives import IncludeCodeDirective
-
-def setup(app):
- app.add_directive('includecode', IncludeCodeDirective)
- app.add_config_value('docs_branch', '', True)
- app.add_config_value('branch', '', True)
- app.add_config_value('cst_module_name', '', True)
- app.add_config_value('copy_repo', '', True)
- app.add_config_value('giturl', '', True)
- app.add_config_value('repo_whl', '', True)
-
-src_release = os.path.join(os.getenv("MF_PATH"), 'RELEASE.md')
-des_release = "./RELEASE.md"
-with open(src_release, "r", encoding="utf-8") as f:
- data = f.read()
-if len(re.findall(r"\n## (.*?)\n", data)) > 1:
- content = re.findall(r"(## [\s\S\n]*?)\n## ", data)
-else:
- content = re.findall(r"(## [\s\S\n]*)", data)
-#result = content[0].replace('# MindSpore', '#', 1)
-with open(des_release, "w", encoding="utf-8") as p:
- p.write("# Release Notes"+"\n\n")
- p.write(content[0])
\ No newline at end of file
diff --git a/docs/federated/docs/source_en/cross_device.rst b/docs/federated/docs/source_en/cross_device.rst
deleted file mode 100644
index fbbbf5f34ac67b7fa855b672393e32d67755a79b..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/cross_device.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-Device-side Client
-======================
-
-.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg
- :target: https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/cross_device.rst
- :alt: View Source on Gitee
-
-.. toctree::
- :maxdepth: 1
-
- java_api_callback
- java_api_client
- java_api_clientmanager
- java_api_dataset
- java_api_flparameter
- java_api_syncfljob
- interface_description_federated_client
diff --git a/docs/federated/docs/source_en/data_join.md b/docs/federated/docs/source_en/data_join.md
deleted file mode 100644
index c76aab81230f589794fb91f515879802457cdafd..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/data_join.md
+++ /dev/null
@@ -1,241 +0,0 @@
-# Vertical Federated Learning Data Access
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/data_join.md)
-
-Unlike horizontal federated learning, in vertical federated learning the two participants (leader and follower) share the same sample space for training or inference. Therefore, before the two parties initiate training or inference, they must collaboratively compute the data intersection. Each party reads its own raw data and extracts the ID of each record (a unique identifier, with no duplicates) for intersection (i.e., finding the common IDs). Both parties then obtain features or labels from their raw data based on the intersected IDs. Finally, each party exports a persistence file and reads the data in the same order before subsequent training or inference.
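As a minimal illustration of the intersection step, the logic can be sketched with a plain set intersection. Note that the real implementation uses PSI so that neither party reveals IDs outside the intersection; the record fields here ("feat", "label") are hypothetical.

```python
# Minimal (non-private) sketch of the ID-intersection step: both parties
# keep only records whose IDs appear on both sides, in a common order.

def intersect_and_extract(leader_rows, follower_rows):
    """Each argument maps a data ID to that party's record."""
    common_ids = sorted(set(leader_rows) & set(follower_rows))
    leader_out = [leader_rows[i] for i in common_ids]
    follower_out = [follower_rows[i] for i in common_ids]
    return common_ids, leader_out, follower_out

leader = {"id1": {"feat": 1}, "id2": {"feat": 2}, "id3": {"feat": 3}}
follower = {"id2": {"label": 0}, "id3": {"label": 1}, "id4": {"label": 0}}
ids, l_out, f_out = intersect_and_extract(leader, follower)
print(ids)  # ['id2', 'id3']
```

In production the intersection is computed bucket by bucket with PSI, but the resulting data flow (intersect IDs, then extract matching records on both sides) is the same.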
-
-## Overall Process
-
-Data access can be divided into two parts: data export and data read.
-
-### Exporting Data
-
-The MindSpore Federated vertical federated learning data export process framework is shown in Figure 1:
-
-
-
-Fig. 1 Vertical Federated Learning Data Export Process Framework Diagram
-
-In the data export process, the Leader Worker and Follower Worker are the two participants in vertical federated learning. The Leader Worker is resident and keeps listening for the Follower Worker, which can enter the data access process at any moment.
-
-After the Leader Worker receives a registration request from the Follower Worker, it checks the registration content. If the registration is successful, the task-related hyperparameters (PSI-related hyperparameters, bucketing rules, ID field names, etc.) are sent to the Follower Worker.
-
-The Leader Worker and Follower Worker read their respective raw data, extract the list of IDs from their raw data and implement bucketing.
-
-Each bucket of Leader Worker and Follower Worker initiates the privacy intersection method to obtain the ID intersections of the two parties.
-
-Finally, the two parties extract the corresponding data from the original data based on the ID intersections and export it to a file in MindRecord format.
-
-### Reading Data
-
-Vertical federated learning requires that, for each batch of training or inference, both participants have data with the same ID values in the same order. When both parties read their respective data, MindSpore Federated guarantees an identical read order by using the same random seed and by sorting the exported file sets lexicographically.
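The ordering guarantee described above can be sketched as follows: both parties sort their exported file names lexicographically and then shuffle them with the same random seed, so the resulting read order is identical on both sides. The function name is illustrative.

```python
import random

def read_order(exported_files, seed):
    ordered = sorted(exported_files)       # lexicographic (dictionary) sort
    random.Random(seed).shuffle(ordered)   # shared random seed
    return ordered

leader_files = ["mindrecord_2", "mindrecord_0", "mindrecord_1"]
follower_files = ["mindrecord_1", "mindrecord_2", "mindrecord_0"]
assert read_order(leader_files, 0) == read_order(follower_files, 0)
```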
-
-## An Example for Quick Experience
-
-### Sample Data Preparation
-
-To use the data access method, the original data needs to be prepared first. The user can use [random data generation script](https://gitee.com/mindspore/federated/blob/master/tests/st/data_join/generate_random_data.py) to generate forged data for each participant as a sample.
-
-```shell
-python generate_random_data.py \
- --seed=0 \
- --total_output_path=vfl/input/total_data.csv \
- --intersection_output_path=vfl/input/intersection_data.csv \
- --leader_output_path=vfl/input/leader_data_*.csv \
- --follower_output_path=vfl/input/follower_data_*.csv \
- --leader_file_num=4 \
- --follower_file_num=2 \
- --leader_data_num=300 \
- --follower_data_num=200 \
- --overlap_num=100 \
- --id_len=20 \
- --feature_num=30
-```
-
-Users can set the hyperparameters according to the actual situation:
-
-| Hyperparameter names | Hyperparameter description |
-| -------------------- | ------------------------------------------------------------ |
-| seed | Random seed, int type. |
-| total_output_path | The output path of all data, str type. |
-| intersection_output_path | The output path of intersection data, str type. |
-| leader_output_path | The export path of the leader data. If the path contains `*`, the `*` is replaced by the serial numbers 0, 1, 2, ... in order when multiple files are exported. str type. |
-| follower_output_path | The export path of the follower data. If the path contains `*`, the `*` is replaced by the serial numbers 0, 1, 2, ... in order when multiple files are exported. str type. |
-| leader_file_num | The number of output files for leader data. int type. |
-| follower_file_num | The number of output files for follower data. int type. |
-| leader_data_num | The total number of leader data. int type. |
-| follower_data_num | The total number of follower data. int type. |
-| overlap_num | The total amount of data that overlaps between leader and follower data. int type. |
-| id_len | The length of the string-type data ID. int type. |
-| feature_num | The number of feature columns in the exported data. int type. |
-
-Running the data preparation script generates multiple csv files:
-
-```text
-follower_data_0.csv
-follower_data_1.csv
-intersection_data.csv
-leader_data_0.csv
-leader_data_1.csv
-leader_data_2.csv
-leader_data_3.csv
-```
-
-### Sample of Data Export
-
-Users can use the [data intersection script](https://gitee.com/mindspore/federated/blob/master/tests/st/data_join/run_data_join.py) to compute the data intersection between the two parties and export it to MindRecord format files. The Leader and Follower processes must be started separately.
-
-Start Leader:
-
-```shell
-python run_data_join.py \
- --role="leader" \
- --main_table_files="vfl/input/leader/" \
- --output_dir="vfl/output/leader/" \
- --data_schema_path="vfl/leader_schema.yaml" \
- --server_name=leader_node \
- --http_server_address="127.0.0.1:1086" \
- --remote_server_name=follower_node \
- --remote_server_address="127.0.0.1:1087" \
- --primary_key="oaid" \
- --bucket_num=5 \
- --store_type="csv" \
- --shard_num=1 \
- --join_type="psi" \
- --thread_num=0
-```
-
-Start Follower:
-
-```shell
-python run_data_join.py \
- --role="follower" \
- --main_table_files="vfl/input/follower/" \
- --output_dir="vfl/output/follower/" \
- --data_schema_path="vfl/follower_schema.yaml" \
- --server_name=follower_node \
- --http_server_address="127.0.0.1:1087" \
- --remote_server_name=leader_node \
- --remote_server_address="127.0.0.1:1086" \
- --store_type="csv" \
- --thread_num=0
-```
-
-Users can set the hyperparameters according to the actual situation.
-
-| Hyperparameter names | Hyperparameter description |
-| ------------------- | ------------------------------------------------------- |
-| role | Role types of the worker. str type. Including: "leader", "follower". |
-| main_table_files | The path of the raw data; a single file path, multiple file paths, or a data directory path can be configured. list or str type. |
-| output_dir | The directory path of the exported MindRecord related files, str type. |
-| data_schema_path | The path of the data schema file to be configured during export, str type. |
-| server_name | Name of the local http server used for communication, str type. |
-| http_server_address | Local IP and port address, str type. |
-| remote_server_name | Name of the remote http server used for communication, str type. |
-| remote_server_address | Peer IP and port address, str type. |
-| primary_key (Follower does not need to be configured) | The name of data ID, str type. |
-| bucket_num (Follower does not need to be configured) | The number of buckets used when intersecting and exporting, int type. |
-| store_type | Raw data storage type, str type. Including: "csv". |
-| shard_num (Follower does not need to be configured) | The number of files exported from a single bucket, int type. |
-| join_type (Follower does not need to be configured) | Algorithm of intersection finding, str type. Including: "psi". |
-| thread_num | The number of threads used by the PSI intersection algorithm, int type. |
-
-In the above sample, the files referenced by data_schema_path can be modeled on [leader_schema.yaml](https://gitee.com/mindspore/federated/blob/master/tests/st/data_join/vfl/leader_schema.yaml) and [follower_schema.yaml](https://gitee.com/mindspore/federated/blob/master/tests/st/data_join/vfl/follower_schema.yaml). Users need to provide the column names and types of the data to be exported in these files.
-
-Running the data export generates multiple MindRecord-related files:
-
-```text
-mindrecord_0
-mindrecord_0.db
-mindrecord_1
-mindrecord_1.db
-mindrecord_2
-mindrecord_2.db
-mindrecord_3
-mindrecord_3.db
-mindrecord_4
-mindrecord_4.db
-```
-
-### Sample of Data Reading
-
-Users can use the [data reading script](https://gitee.com/mindspore/federated/blob/master/tests/st/data_join/load_joined_data.py) to read the joined data after intersection.
-
-```shell
-python load_joined_data.py \
- --seed=0 \
- --input_dir=vfl/output/leader/ \
- --shuffle=True
-```
-
-Users can set the hyperparameters according to the actual situation.
-
-| Hyperparameter names | Hyperparameter description |
-| --------- | ----------------------------------------- |
-| seed | Random seed. int type. |
-| input_dir | The directory of the input MindRecord related files, str type. |
-| shuffle | Whether the data order needs to be changed, bool type. |
-
-If the intersection result is correct, then when the two parties read their data, the OAID order of the records is identical on both sides, while the values in the other columns of each record may differ. The intersection data printed after running the data read is as follows:
-
-```text
-Leader data export results:
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'uMbgxIMMwWhMGrVMVtM7')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'IwoGP08kWVtT4WHL2PLu')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'MSRe6mURtxgyEgWzDn0b')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'y7X0WcMKnTLrhxVcWfGF')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'DicKRIVvbOYSiv63TvcL')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'TCHgtynOhH3z11QYemsH')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'OWmhgIfC3k8UTteGUhni')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'NTV3qEYXBHqKBWyHGc7s')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'wuinSeN1bzYgXy4XmSlR')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'SSsCU0Pb46XGzUIa3Erg')}
-……
-
-Follower data export results:
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'uMbgxIMMwWhMGrVMVtM7')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'IwoGP08kWVtT4WHL2PLu')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'MSRe6mURtxgyEgWzDn0b')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'y7X0WcMKnTLrhxVcWfGF')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'DicKRIVvbOYSiv63TvcL')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'TCHgtynOhH3z11QYemsH')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'OWmhgIfC3k8UTteGUhni')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'NTV3qEYXBHqKBWyHGc7s')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'wuinSeN1bzYgXy4XmSlR')}
-{……, 'oaid': Tensor(shape=[], dtype=String, value= 'SSsCU0Pb46XGzUIa3Erg')}
-……
-```
-
-## An Example for Deep Experience
-
-For detailed API documentation for the following code, see [Data Access Documentation](https://www.mindspore.cn/federated/docs/en/master/data_join/data_join.html).
-
-### Data Export
-
-Users can perform the data join and export MindRecord-related files by using the encapsulated interface and a yaml file as follows:
-
-```python
-import os
-
-from mindspore_federated import FLDataWorker
-from mindspore_federated.common.config import get_config
-
-
-if __name__ == '__main__':
- current_dir = os.path.dirname(os.path.abspath(__file__))
- args = get_config(os.path.join(current_dir, "vfl/vfl_data_join_config.yaml"))
- dict_cfg = args.__dict__
-
- worker = FLDataWorker(config=dict_cfg)
- worker.do_worker()
-```
-
-### Data Reading
-
-Users can read the data in the exported MindRecord-related files by using the encapsulated interface as follows:
-
-```python
-from mindspore_federated.data_join import load_mindrecord
-
-
-if __name__ == "__main__":
- dataset = load_mindrecord(input_dir="vfl/output/leader/", shuffle=True, seed=0)
-```
diff --git a/docs/federated/docs/source_en/deploy_federated_client.md b/docs/federated/docs/source_en/deploy_federated_client.md
deleted file mode 100644
index 479340ad31695118282d8acfe69108f925100d08..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/deploy_federated_client.md
+++ /dev/null
@@ -1,202 +0,0 @@
-# Horizontal Federated Device-side Deployment
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/deploy_federated_client.md)
-
-This document describes how to compile and deploy Federated-Client.
-
-## Linux Compilation Guidance
-
-### System Environment and Third-party Dependencies
-
-This section describes how to complete the device-side compilation of MindSpore federated learning. Currently, the federated learning device-side only provides compilation guidance on Linux, and other systems are not supported. The following table lists the system environment and third-party dependencies required for compilation.
-
-| Software Name | Version | Functions |
-|-----------------------| ------------ | ------------ |
-| Ubuntu | 18.04.02LTS | Operating system for compiling and running MindSpore |
-| [GCC](#installing-gcc) | 7.3.0 to 9.4.0 | C++ compiler for compiling MindSpore |
-| [git](#installing-git) | - | Source code management tools used by MindSpore |
-| [CMake](#installing-cmake) | 3.18.3 and above | Compiling and building MindSpore tools |
-| [Gradle](#installing-gradle) | 6.6.1 | JVM-based building tools |
-| [Maven](#installing-maven) | 3.3.1 and above | Tools for managing and building Java projects |
-| [OpenJDK](#installing-openjdk) | 1.8 to 1.15 | Java development kit for building and running Java code |
-
-#### Installing GCC
-
-Install GCC with the following command.
-
-```bash
-sudo apt-get install gcc-7 git -y
-```
-
-To install a higher version of GCC, use the following command to install GCC 8.
-
-```bash
-sudo apt-get install gcc-8 -y
-```
-
-Or install GCC 9.
-
-```bash
-sudo apt-get install software-properties-common -y
-sudo add-apt-repository ppa:ubuntu-toolchain-r/test
-sudo apt-get update
-sudo apt-get install gcc-9 -y
-```
-
-#### Installing git
-
-Install git with the following command.
-
-```bash
-sudo apt-get install git -y
-```
-
-#### Installing CMake
-
-Install [CMake](https://cmake.org/) with the following command.
-
-```bash
-wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | sudo apt-key add -
-sudo apt-add-repository "deb https://apt.kitware.com/ubuntu/ $(lsb_release -cs) main"
-sudo apt-get update
-sudo apt-get install cmake -y
-```
-
-#### Installing Gradle
-
-Install [Gradle](https://gradle.org/releases/) with the following command.
-
-```bash
-# Download the corresponding zip package and unzip it.
-# Configure environment variables:
- export GRADLE_HOME=GRADLE path
- export GRADLE_USER_HOME=GRADLE path
-# Add the bin directory to the PATH:
- export PATH=${GRADLE_HOME}/bin:$PATH
-```
-
-#### Installing Maven
-
-Install [Maven](https://archive.apache.org/dist/maven/maven-3/) with the following command.
-
-```bash
-# Download the corresponding zip package and unzip it.
-# Configure environment variables:
- export MAVEN_HOME=MAVEN path
-# Add the bin directory to the PATH:
- export PATH=${MAVEN_HOME}/bin:$PATH
-```
-
-#### Installing OpenJDK
-
-Install [OpenJDK](https://jdk.java.net/archive/) with the following command.
-
-```bash
-# Download the corresponding zip package and unzip it.
-# Configure environment variables:
- export JAVA_HOME=JDK path
-# Add the bin directory to the PATH:
- export PATH=${JAVA_HOME}/bin:$PATH
-```
-
-### Verifying Installation
-
-Verify that the installation in [System environment and third-party dependencies](#system-environment-and-third-party-dependencies) is successful.
-
-```text
-Open a command window and enter: gcc --version
-The following output indicates a successful installation:
- gcc version version number
-
-Open a command window and enter: git --version
-The following output indicates a successful installation:
- git version version number
-
-Open a command window and enter: cmake --version
-The following output indicates a successful installation:
- cmake version version number
-
-Open a command window and enter: gradle --version
-The following output indicates a successful installation:
- Gradle version number
-
-Open a command window and enter: mvn --version
-The following output indicates a successful installation:
- Apache Maven version number
-
-Open a command window and enter: java -version
-The following output indicates a successful installation:
- openjdk version version number
-
-```
-
-### Compilation Options
-
-The `cli_build.sh` script in the federated learning device_client directory is used for compilation on the federated learning device-side.
-
-#### Instructions for Using cli_build.sh Parameters
-
-| Parameters | Parameter Description | Value Range | Default Values |
-| ---- | ------------------------ | -------- | ------------ |
-| -p | The download path of external dependency packages | string | third |
-| -c | Whether to reuse previously downloaded dependency packages | on or off | on |
-
-### Compilation Examples
-
-1. First, download the source code from the Gitee code repository:
-
- ```bash
- git clone https://gitee.com/mindspore/federated.git ./
- ```
-
-2. Go to the mindspore_federated/device_client directory and execute the following command:
-
- ```bash
- bash cli_build.sh
- ```
-
-3. Since the device-side framework is decoupled from the model, the x86 architecture package we provide, mindspore-lite-{version}-linux-x64.tar.gz, does not contain model-related scripts. Users therefore need to generate the jar package corresponding to their model scripts, which can be obtained as follows:
-
- ```bash
- cd federated/example/quick_start_flclient
- bash build.sh -r mindspore-lite-java-flclient.jar # After -r, you need to give the absolute path to the latest x86 architecture package (generated in Step 2, federated/mindspore_federated/device_client/build/libs/jarX86/mindspore-lite-java-flclient.jar)
- ```
-
-After running the above command, the path of generated jar package is federated/example/quick_start_flclient/target/quick_start_flclient.jar.
-
-### Building Dependency Environment
-
-1. After extracting the file `federated/mindspore_federated/device_client/third/mindspore-lite-{version}-linux-x64.tar.gz`, the obtained directory structure is as follows (files not used in federated learning are omitted):
-
- ```sh
- mindspore-lite-{version}-linux-x64
- ├── tools
- └── runtime
- ├── include # Header files of training framework
- ├── lib # Training framework library
- │ ├── libminddata-lite.a # Static library files for image processing
- │ ├── libminddata-lite.so # Dynamic library files for image processing
- │ ├── libmindspore-lite-jni.so # jni dynamic library relied by MindSpore Lite inference framework
- │ ├── libmindspore-lite-train.a # Static library relied by MindSpore Lite training framework
- │ ├── libmindspore-lite-train.so # Dynamic library relied by MindSpore Lite training framework
- │ ├── libmindspore-lite-train-jni.so # jni dynamic library relied by MindSpore Lite training framework
- │ ├── libmindspore-lite.a # Static library relied by MindSpore Lite inference framework
- │ ├── libmindspore-lite.so # Dynamic library relied by MindSpore Lite inference framework
- │ ├── mindspore-lite-java.jar # MindSpore Lite training framework jar package
- └── third_party
- ├── glog
- │└── libmindspore_glog.so.0 # Dynamic library files of glog
- └── libjpeg-turbo
- └── lib
- ├── libjpeg.so.62 # Dynamic library files for image processing
- └── libturbojpeg.so.0 # Dynamic library files for image processing
- ```
-
-2. Put the .so files that federated learning depends on, located in `mindspore-lite-{version}-linux-x64/runtime/lib/`, `mindspore-lite-{version}-linux-x64/runtime/third_party/glog/` and `mindspore-lite-{version}-linux-x64/runtime/third_party/libjpeg-turbo/lib/`, into one folder, e.g. `/resource/x86libs/`. Then set the environment variable on x86 (absolute paths must be provided):
-
- ```sh
- export LD_LIBRARY_PATH=/resource/x86libs/:$LD_LIBRARY_PATH
- ```
-
-3. After setting up the dependency environment, you can simulate starting multiple clients in the x86 environment for federated learning by referring to the application practice tutorial [Implementing an end-cloud federation for image classification application (x86)](https://www.mindspore.cn/federated/docs/en/master/image_classification_application.html).
-
-
diff --git a/docs/federated/docs/source_en/deploy_federated_server.md b/docs/federated/docs/source_en/deploy_federated_server.md
deleted file mode 100644
index 12bccc355abeb63f2b625565be03306cb25e3ba8..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/deploy_federated_server.md
+++ /dev/null
@@ -1,317 +0,0 @@
-# Horizontal Federated Cloud-based Deployment
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/deploy_federated_server.md)
-
-The following uses LeNet as an example to describe how to use MindSpore Federated to deploy a horizontal federated learning cluster.
-
-The following figure shows the physical architecture of the MindSpore Federated Learning (FL) Server cluster:
-
-
-
-As shown in the preceding figure, in the horizontal federated learning cloud cluster, there are three MindSpore process roles: `Federated Learning Scheduler`, `Federated Learning Server` and `Federated Learning Worker`:
-
-- Federated Learning Scheduler
-
- `Scheduler` provides the following functions:
-
- 1. Cluster networking assistance: During cluster initialization, the `Scheduler` collects server information and ensures cluster consistency.
- 2. Open management plane: You can manage clusters through the `RESTful` APIs.
-
- In a federated learning task, there is only one `Scheduler`, which communicates with the `Server` using the TCP proprietary protocol.
-
-- Federated Learning Server
-
- `Server` executes federated learning tasks, receives and parses data from devices, and provides capabilities such as secure aggregation, time-limited communication, and model storage. In a federated learning task, users can configure multiple `Servers` which communicate with each other through the TCP proprietary protocol and open HTTP ports for device-side connection.
-
- In the MindSpore federated learning framework, `Server` also supports auto scaling and disaster recovery, and can dynamically schedule hardware resources without interrupting training tasks.
-
-- Federated Learning Worker
-
- `Worker` is an auxiliary module for executing federated learning tasks; it performs supervised retraining of the model obtained from the `Server`, and the retrained model is then distributed back to the `Server`. In a federated learning task, there can be one or more `Worker`s (user-configurable), and communication between `Worker` and `Server` is performed via the TCP protocol.
-
-`Scheduler` and `Server` must be deployed on a server or container with a single NIC and in the same network segment. MindSpore automatically obtains the first available IP address as the `Server` IP address.
-
-> The servers verify the timestamp carried by the clients. It is necessary to ensure that the servers are periodically time-synchronized to avoid a large time offset.
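The timestamp check that motivates this time synchronization can be sketched as below. The 600000 ms window matches the `replay_attack_time_diff` value used in the sample yaml configuration; the function itself is illustrative, not the server's actual API.

```python
import time

def timestamp_valid(client_ts_ms, server_ts_ms, max_diff_ms=600000):
    """Accept a request only if the client timestamp deviates from the
    server clock by at most max_diff_ms (cf. replay_attack_time_diff)."""
    return abs(server_ts_ms - client_ts_ms) <= max_diff_ms

now_ms = int(time.time() * 1000)
assert timestamp_valid(now_ms - 1000, now_ms)            # small skew: accepted
assert not timestamp_valid(now_ms - 3_600_000, now_ms)   # 1 h offset: rejected
```

A client whose clock has drifted beyond the window is rejected, which is why both servers and clients should stay NTP-synchronized.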
-
-## Preparations
-
-> It is recommended to create a virtual environment for the following operations with [Anaconda](https://www.anaconda.com/).
-
-### Installing MindSpore
-
-The MindSpore horizontal federated learning cloud cluster supports deployment on x86 CPU and GPU CUDA hardware platforms. Run commands provided by the [MindSpore Installation Guide](https://www.mindspore.cn/install) to install the latest MindSpore.
-
-### Installing MindSpore Federated
-
-Compile and install with [source code](https://gitee.com/mindspore/federated).
-
-```shell
-git clone https://gitee.com/mindspore/federated.git -b master
-cd federated
-bash build.sh
-```
-
-For `bash build.sh`, compilation can be accelerated by the `-jn` option, e.g. `-j16`. The third-party dependencies can be downloaded from gitee instead of github by the `-S on` option.
-
-After compilation, find the whl installation package of Federated in the `build/package/` directory to install:
-
-```bash
-pip install mindspore_federated-{version}-{python_version}-linux_{arch}.whl
-```
-
-### Verifying Installation
-
-Execute the following command to verify the installation result. The installation is successful if no error is reported when importing Python modules.
-
-```python
-from mindspore_federated import FLServerJob
-```
-
-### Installing and Starting Redis Server
-
-Federated Learning relies on [Redis Server](https://gitee.com/link?target=https%3A%2F%2Fredis.io%2F) as the cached data middleware by default. To run the Federated Learning service, a Redis server needs to be installed and run.
-
-> Users must check the security of the Redis version to be used, as some versions may have security vulnerabilities.
-
-Install Redis server:
-
-```bash
-sudo apt-get install redis
-```
-
-Start the Redis server on port 23456:
-
-```bash
-redis-server --port 23456 --save ""
-```
-
-## Starting a Cluster
-
-1. Go to the directory of the [examples](https://gitee.com/mindspore/federated/tree/master/example/cross_device_lenet_femnist/):
-
- ```bash
- cd example/cross_device_lenet_femnist
- ```
-
-2. Modify the yaml configuration file `default_yaml_config.yaml` according to the actual deployment. The [sample configuration of LeNet](https://gitee.com/mindspore/federated/blob/master/example/cross_device_lenet_femnist/yamls/lenet/default_yaml_config.yaml) is as follows:
-
- ```yaml
- fl_name: Lenet
- fl_iteration_num: 25
- server_mode: FEDERATED_LEARNING
- enable_ssl: False
-
- distributed_cache:
- type: redis
- address: 127.0.0.1:23456 # ip:port of redis actual machine
- plugin_lib_path: ""
-
- round:
- start_fl_job_threshold: 2
- start_fl_job_time_window: 30000
- update_model_ratio: 1.0
- update_model_time_window: 30000
- global_iteration_time_window: 60000
-
- summary:
- metrics_file: "metrics.json"
- failure_event_file: "event.txt"
- continuous_failure_times: 10
- data_rate_dir: ".."
- participation_time_level: "5,15"
-
- unsupervised:
- cluster_client_num: 1000
- eval_type: SILHOUETTE_SCORE
-
- encrypt:
- encrypt_train_type: NOT_ENCRYPT
- pw_encrypt:
- share_secrets_ratio: 1.0
- cipher_time_window: 3000
- reconstruct_secrets_threshold: 1
- dp_encrypt:
- dp_eps: 50.0
- dp_delta: 0.01
- dp_norm_clip: 1.0
- signds:
- sign_k: 0.01
- sign_eps: 100
- sign_thr_ratio: 0.6
- sign_global_lr: 0.1
- sign_dim_out: 0
-
- compression:
- upload_compress_type: NO_COMPRESS
- upload_sparse_rate: 0.4
- download_compress_type: NO_COMPRESS
-
- ssl:
- # when ssl_config is set
- # for tcp/http server
- server_cert_path: "server.p12"
- # for tcp client
- client_cert_path: "client.p12"
- # common
- ca_cert_path: "ca.crt"
- crl_path: ""
- cipher_list: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-CCM:ECDHE-ECDSA-AES256-CCM:ECDHE-ECDSA-CHACHA20-POLY1305"
- cert_expire_warning_time_in_day: 90
-
- client_verify:
- pki_verify: false
- root_first_ca_path: ""
- root_second_ca_path: ""
- equip_crl_path: ""
- replay_attack_time_diff: 600000
-
- client:
- http_url_prefix: ""
- client_epoch_num: 20
- client_batch_size: 32
- client_learning_rate: 0.01
- connection_num: 10000
-
- ```
-
-3. Prepare the model file. The cluster is started from model weights in this example, so you need to provide the corresponding weights.
-
- Obtain lenet model weight:
-
- ```bash
- wget https://ms-release.obs.cn-north-4.myhuaweicloud.com/ms-dependencies/Lenet.ckpt
- ```
-
-4. Run the Scheduler. The management address is `127.0.0.1:11202` by default.
-
- ```bash
- python run_sched.py \
- --yaml_config="yamls/lenet.yaml" \
- --scheduler_manage_address="10.*.*.*:18019"
- ```
-
-5. Run the Server. This starts one Server, whose HTTP server address is `127.0.0.1:6666` by default.
-
- ```bash
- python run_server.py \
- --yaml_config="yamls/lenet.yaml" \
- --tcp_server_ip="10.*.*.*" \
- --checkpoint_dir="fl_ckpt" \
- --local_server_num=1 \
- --http_server_address="10.*.*.*:8019"
- ```
-
-6. Stop federated learning. In the current version, the federated learning cluster consists of resident processes, and executing the `finish_cloud.py` script terminates the federated learning service. In the example command below, `redis_port` takes the same value that was passed when starting Redis, and identifies the cluster of this `Scheduler` to be stopped.
-
-    ```sh
- python finish_cloud.py --redis_port=23456
- ```
-
-    If the console prints the following:
-
- ```text
- killed $PID1
- killed $PID2
- killed $PID3
- killed $PID4
- killed $PID5
- killed $PID6
- killed $PID7
- killed $PID8
- ```
-
-    it indicates that the service was terminated successfully.
-
-## Auto Scaling
-
-MindSpore federated learning framework supports `Server` auto scaling and provides `RESTful` services externally through the `Scheduler` management port, enabling users to dynamically schedule hardware resources without interrupting training tasks.
-
-The following example describes how to control scale-out and scale-in of cluster through APIs.
-
-### Scale-out
-
-After the cluster starts, enter the machine where the scheduler node is deployed and make a request to the `Scheduler` to query the status and node information. A `RESTful` request can be constructed with the `curl` command.
-
-```sh
-curl -k 'http://10.*.*.*:18015/state'
-```
-
-`Scheduler` will return query results in `json` format.
-
-```json
-{
- "message":"Get cluster state successful.",
- "cluster_state":"CLUSTER_READY",
- "code":0,
- "nodes":[
-        {"node_id":"{ip}:{port}::{timestamp}::{random}",
-        "tcp_address":"{ip}:{port}",
-        "role":"SERVER"}
- ]
-}
-```
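The returned `state` payload can also be parsed programmatically when scripting cluster checks. The sketch below is illustrative only: it parses a response in the shape shown above and counts the nodes with a given role (fetching the URL itself, e.g. with `urllib`, is omitted so the snippet stays self-contained; all field values are placeholders):

```python
import json

def count_nodes_by_role(state_json, role="SERVER"):
    """Parse a Scheduler /state response and count nodes with the given role."""
    state = json.loads(state_json)
    if state.get("code") != 0:
        raise RuntimeError("state query failed: " + state.get("message", ""))
    return sum(1 for node in state.get("nodes", []) if node.get("role") == role)

# Example payload in the shape shown above (field values are placeholders).
response = """{
  "message": "Get cluster state successful.",
  "cluster_state": "CLUSTER_READY",
  "code": 0,
  "nodes": [
    {"node_id": "10.0.0.1:2000::1669000000::42",
     "tcp_address": "10.0.0.1:2000",
     "role": "SERVER"}
  ]
}"""

print(count_nodes_by_role(response))  # 1
```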
-
-To scale out, pull up 3 new `Server` processes and increase the `local_server_num` parameter by the number of added nodes, so that the global networking information remains correct; that is, after scale-out, `local_server_num` should be 4. An example command is as follows:
-
-```sh
-python run_server.py --yaml_config="yamls/lenet.yaml" --tcp_server_ip="10.*.*.*" --checkpoint_dir="fl_ckpt" --local_server_num=4 --http_server_address="10.*.*.*:18015"
-```
-
-This command starts three additional `Server` nodes, bringing the total number of `Server` nodes to 4.
-
-### Scale-in
-
-Simulate scale-in by killing a `Server` process directly with `kill -9 <pid>`, then construct a `RESTful` request with the `curl` command and query the cluster state: one `node_id` is now missing from the cluster, which achieves the scale-in.
-
-```sh
-curl -k \
-'http://10.*.*.*:18015/state'
-```
-
-`Scheduler` returns the query results in `json` format.
-
-```json
-{
- "message":"Get cluster state successful.",
- "cluster_state":"CLUSTER_READY",
- "code":0,
- "nodes":[
-        {"node_id":"{ip}:{port}::{timestamp}::{random}",
-        "tcp_address":"{ip}:{port}",
-        "role":"SERVER"},
-        {"node_id":"worker_fl_{timestamp}::{random}",
-        "tcp_address":"",
-        "role":"WORKER"},
-        {"node_id":"worker_fl_{timestamp}::{random}",
-        "tcp_address":"",
-        "role":"WORKER"}
- ]
-}
-```
-
-> - After scale-out/scale-in of the cluster is successful, the training tasks are automatically resumed without additional intervention.
-
-## Security
-
-The MindSpore federated learning framework supports SSL security authentication of `Server`. To enable it, add `enable_ssl=True` to the startup command, and add the following fields to the config.json file specified by `config_file_path`:
-
-```json
-{
- "server_cert_path": "server.p12",
- "crl_path": "",
- "client_cert_path": "client.p12",
- "ca_cert_path": "ca.crt",
- "cert_expire_warning_time_in_day": 90,
- "cipher_list": "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK",
- "connection_num":10000
-}
-```
-
-- server_cert_path: Path to the server-side p12 file containing the encrypted certificate and key.
-- crl_path: Path to the certificate revocation list file.
-- client_cert_path: Path to the client-side p12 file containing the encrypted certificate and key.
-- ca_cert_path: Path to the root (CA) certificate.
-- cipher_list: Supported SSL cipher suites.
-- cert_expire_warning_time_in_day: Number of days before certificate expiration at which a warning is raised.
-
-The key in the p12 file is stored in cipher text.
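Before starting with `enable_ssl=True`, it can be useful to sanity-check the config file. The following is a minimal sketch, not part of the framework: the required-field list simply mirrors the fields documented above, and the helper name `missing_ssl_fields` is made up for illustration:

```python
import json

# Fields the SSL section above documents for config.json.
REQUIRED_SSL_FIELDS = [
    "server_cert_path", "crl_path", "client_cert_path",
    "ca_cert_path", "cert_expire_warning_time_in_day", "cipher_list",
]

def missing_ssl_fields(config):
    """Return the documented SSL fields absent from a parsed config dict."""
    return [field for field in REQUIRED_SSL_FIELDS if field not in config]

config = json.loads("""{
  "server_cert_path": "server.p12",
  "crl_path": "",
  "client_cert_path": "client.p12",
  "ca_cert_path": "ca.crt",
  "cert_expire_warning_time_in_day": 90,
  "cipher_list": "ECDHE-RSA-AES128-GCM-SHA256",
  "connection_num": 10000
}""")

print(missing_ssl_fields(config))  # []
```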
diff --git a/docs/federated/docs/source_en/deploy_vfl.md b/docs/federated/docs/source_en/deploy_vfl.md
deleted file mode 100644
index 9aeae5961bb201e55ec866cc9431a1ef8309c8dd..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/deploy_vfl.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Vertical Federated Deployment
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/deploy_vfl.md)
-
-This document explains how to use and deploy the vertical federated learning framework.
-
-The MindSpore Vertical Federated Learning (VFL) physical architecture is shown in the figure:
-
-
-
-As shown above, there are two participants in the vertical federated interaction: the Leader node and the Follower node, each of which has processes in two roles: `FLDataWorker` and `VFLTrainer`:
-
-- FLDataWorker
-
-    The main functions of `FLDataWorker` include:
-
- 1. Dataset intersection: obtains a common user intersection for both vertical federated participants, and supports a privacy dataset intersection protocol that prevents federated learning participants from obtaining ID information outside the intersection.
- 2. Training data generation: After obtaining the intersection ID, the data features are expanded to generate the mindrecord file for training.
-    3. Management interface: a `RESTful` interface is provided to users for cluster management.
-
-    In a federated learning task, there is only one `Scheduler`, which communicates with the `Server` through the TCP protocol.
-
-- VFLTrainer
-
-    `VFLTrainer` is the main body that performs vertical federated training tasks: forward and backward computation after model slicing, embedding tensor transfer, gradient tensor transfer, and backward optimizer updates. The current version supports single-machine single-card and single-machine multi-card training modes.
-
- In the MindSpore federated learning framework, `Server` also supports elastic scaling and disaster recovery, enabling dynamic provisioning of hardware resources without interruption of training tasks.
-
-`FLDataWorker` and `VFLTrainer` are generally deployed in the same server or container.
-
-## Preparation
-
-> It is recommended to use [Anaconda](https://www.anaconda.com/) to create a virtual environment for the following operations.
-
-### Installing MindSpore
-
-MindSpore vertical federated supports deployment on x86 CPU, GPU CUDA and Ascend hardware platforms. The latest version of MindSpore can be installed by referring to [MindSpore Installation Guide](https://www.mindspore.cn/install).
-
-### Installing MindSpore Federated
-
-Compile and install via [source code](https://gitee.com/mindspore/federated).
-
-```shell
-git clone https://gitee.com/mindspore/federated.git -b master
-cd federated
-bash build.sh
-```
-
-For `bash build.sh`, compilation can be accelerated with the `-jn` option, e.g. `-j16`, and third-party dependencies can be downloaded from Gitee instead of GitHub with the `-S on` option.
-
-Once compiled, find the Federated whl installation package in the `build/package/` directory to install.
-
-```shell
-pip install mindspore_federated-{version}-{python_version}-linux_{arch}.whl
-```
-
-#### Verifying installation
-
-Execute the following command to verify the installation. The installation is successful if no error is reported when importing Python modules.
-
-```python
-from mindspore_federated import FLServerJob
-```
-
-## Running the Example
-
-A running sample of FLDataWorker can be found in [Vertical federated learning data access](https://www.mindspore.cn/federated/docs/en/master/data_join.html).
-
-A running sample of VFLTrainer can be found in [Vertical federated learning model training - Wide&Deep Recommended Application](https://www.mindspore.cn/federated/docs/en/master/split_wnd_application.html).
diff --git a/docs/federated/docs/source_en/faq.md b/docs/federated/docs/source_en/faq.md
deleted file mode 100644
index 912e7d07d46372092d9db4e179df939b99d1c420..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/faq.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# FAQ
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/faq.md)
-
-**Q: If the cluster networking is unsuccessful, how to locate the cause?**
-
-A: Check the server's network conditions. For example, check whether a firewall is blocking port access; if so, configure the firewall to allow access to the required ports.
-
-
\ No newline at end of file
diff --git a/docs/federated/docs/source_en/federated_install.md b/docs/federated/docs/source_en/federated_install.md
deleted file mode 100644
index 4a93f9e2293910567913f6606da4027b951eab0e..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/federated_install.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Obtaining MindSpore Federated
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/federated_install.md)
-
-Currently, the [MindSpore Federated](https://gitee.com/mindspore/federated) framework code is built independently and is divided into a device side and a cloud side. The cloud-side capability relies on MindSpore and MindSpore Federated: MindSpore performs cloud-side cluster aggregation training and communication with the device side, so both the MindSpore whl package and the MindSpore Federated whl package are required. The device-side capability relies on MindSpore Lite and the MindSpore Federated java package: MindSpore Federated java is mainly responsible for data pre-processing, for model training and inference by calling MindSpore Lite, and for model uploads and downloads to and from the cloud side using privacy protection mechanisms.
-
-## Obtaining the MindSpore WHL Package
-
-You can use the source code or download the release version to install MindSpore on hardware platforms such as the x86 CPU and GPU CUDA. For details about the installation process, see [Install](https://www.mindspore.cn/install/en) on the MindSpore website.
-
-## Obtaining the MindSpore Lite Java Package
-
-You can use the source code or download the release version. Currently, only the Linux and Android platforms are supported, and only the CPU hardware architecture is supported. For details about the installation process, see [Downloading MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html) and [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/build/build.html).
-
-## Obtaining MindSpore Federated WHL Package
-
-You can use the source code or download the release version to install MindSpore Federated on hardware platforms such as the x86 CPU and GPU CUDA. For details about the installation process, see [Building MindSpore Federated whl](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_server.html).
-
-## Obtaining MindSpore Federated Java Package
-
-You can use the source code or download the release version. Currently, MindSpore Federated Learning supports the Linux and Android platforms. For details about the installation process, see [Building MindSpore Federated java](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_client.html).
-
-## Requirements for Building the Linux Environment
-
-Currently, the source code build is supported only in the Linux environment. For details about the environment requirements, see [MindSpore Source Code Build](https://www.mindspore.cn/install/en) and [MindSpore Lite Source Code Build](https://www.mindspore.cn/lite/docs/en/master/build/build.html).
diff --git a/docs/federated/docs/source_en/horizontal_server.rst b/docs/federated/docs/source_en/horizontal_server.rst
deleted file mode 100644
index a2f1a2d2aa2f7511f4e4401c3148775232110754..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/horizontal_server.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-Federated Server
-================
-
-.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg
- :target: https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/horizontal_server.rst
- :alt: View Source on Gitee
-
-.. toctree::
- :maxdepth: 1
-
- horizontal/federated_server
- horizontal/federated_server_yaml
\ No newline at end of file
diff --git a/docs/federated/docs/source_en/image_classfication_dataset_process.md b/docs/federated/docs/source_en/image_classfication_dataset_process.md
deleted file mode 100644
index e9578b14a58db46c8ecceb68167a35aa892393cd..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/image_classfication_dataset_process.md
+++ /dev/null
@@ -1,450 +0,0 @@
-# Federated Learning Image Classification Dataset Process
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/image_classfication_dataset_process.md)
-
-This tutorial uses the federated learning dataset `FEMNIST` in the `leaf` dataset, which contains 62 different categories of handwritten digits and letters (digits 0 to 9, 26 lowercase letters, and 26 uppercase letters) with an image size of `28 x 28` pixels. The dataset contains handwritten digits and letters from 3500 users (up to 3500 clients can be simulated to participate in federated learning). The total data volume is 805,263, the average data volume per user is 226.83, and the variance of the data volume for all users is 88.94.
-
-Refer to [leaf dataset instruction](https://github.com/TalwalkarLab/leaf) to download the dataset.
-
-1. Environmental requirements before downloading the dataset.
-
- ```sh
- numpy==1.16.4
- scipy # conda install scipy
- tensorflow==1.13.1 # pip install tensorflow
- Pillow # pip install Pillow
- matplotlib # pip install matplotlib
- jupyter # conda install jupyter notebook==5.7.8 tornado==4.5.3
- pandas # pip install pandas
- ```
-
-2. Use git to download the official dataset generation script.
-
- ```sh
- git clone https://github.com/TalwalkarLab/leaf.git
- ```
-
- After downloading the project, the directory structure is as follows:
-
- ```sh
- leaf/data/femnist
- ├── data # Used to store the dataset generated by the command
- ├── preprocess # Store the code related to data pre-processing
- ├── preprocess.sh # shell script generated by femnist dataset
- └── README.md # Official dataset download guidance
- ```
-
-3. Taking the `femnist` dataset as an example, run the following command to enter the specified path.
-
- ```sh
- cd leaf/data/femnist
- ```
-
-4. Run the command `./preprocess.sh -s niid --sf 1.0 -k 0 -t sample` to generate a dataset containing 3500 users, with each user's data split into training and test sets in a ratio of 9:1.
-
- The meaning of the parameters in the command can be found in the `leaf/data/femnist/README.md` file.
-
- The directory structure after running is as follows:
-
- ```text
- leaf/data/femnist/35_client_sf1_data/
-    ├── all_data   # All data mixed together, without distinguishing training and test sets; 35 json files in total, each containing the data of 100 users
-    ├── test       # Test sets (each user's data is split 9:1 into training and test sets); 35 json files in total, each containing the data of 100 users
-    ├── train      # Training sets (each user's data is split 9:1 into training and test sets); 35 json files in total, each containing the data of 100 users
-    └── ...        # The other files are not needed and are not described here
- ```
-
- Each json file contains the following three parts:
-
- - `users`: User list.
- - `num_samples`: The sample number list of each user.
- - `user_data`: A dictionary object with user names as key and their respective data as value. For each user, the data is represented as a list of images, with each image represented as a list of integers of size 784 (obtained by spreading the `28 x 28` image array).
-
- Before rerunning `preprocess.sh`, make sure to delete the `rem_user_data`, `sampled_data`, `test` and `train` subfolders from the data directory.
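The three-part structure described above can be sanity-checked with a few lines of Python before partitioning. A minimal sketch (the helper name and the toy object are illustrative; a real run would `json.load` one of the 35 files instead):

```python
def summarize_leaf_json(data):
    """Sanity-check a leaf json object and return (num_users, total_samples)."""
    users = data["users"]
    num_samples = data["num_samples"]
    assert len(users) == len(num_samples), "one sample count per user expected"
    assert all(user in data["user_data"] for user in users)
    return len(users), sum(num_samples)

# Toy object in the documented shape (2 users instead of 100, 784-long flattened images).
toy = {
    "users": ["u0", "u1"],
    "num_samples": [3, 2],
    "user_data": {
        "u0": {"x": [[0.0] * 784] * 3, "y": [0, 1, 2]},
        "u1": {"x": [[0.0] * 784] * 2, "y": [3, 4]},
    },
}

print(summarize_leaf_json(toy))  # (2, 5)
```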
-
-5. Divide the 35 json files into 3500 json files (each json file represents a user).
-
- The code is as follows:
-
- ```python
- import os
- import json
-
- def mkdir(path):
- if not os.path.exists(path):
- os.mkdir(path)
-
- def partition_json(root_path, new_root_path):
- """
- partition 35 json files to 3500 json file
-
- Each raw .json file is an object with 3 keys:
- 1. 'users', a list of users
- 2. 'num_samples', a list of the number of samples for each user
- 3. 'user_data', an object with user names as keys and their respective data as values; for each user, data is represented as a list of images, with each image represented as a size-784 integer list (flattened from 28 by 28)
-
- Each new .json file is an object with 3 keys:
- 1. 'user_name', the name of user
- 2. 'num_samples', the number of samples for the user
-        3. 'user_data', a dict object with key 'x' mapping to the user's data and key 'y' mapping to the corresponding labels
-
- Args:
- root_path (str): raw root path of 35 json files
- new_root_path (str): new root path of 3500 json files
- """
- paths = os.listdir(root_path)
- count = 0
- file_num = 0
- for i in paths:
- file_num += 1
- file_path = os.path.join(root_path, i)
- print('======== process ' + str(file_num) + ' file: ' + str(file_path) + '======================')
- with open(file_path, 'r') as load_f:
- load_dict = json.load(load_f)
- users = load_dict['users']
- num_users = len(users)
- num_samples = load_dict['num_samples']
- for j in range(num_users):
- count += 1
- print('---processing user: ' + str(count) + '---')
- cur_out = {'user_name': None, 'num_samples': None, 'user_data': {}}
- cur_user_id = users[j]
- cur_data_num = num_samples[j]
- cur_user_path = os.path.join(new_root_path, cur_user_id + '.json')
- cur_out['user_name'] = cur_user_id
- cur_out['num_samples'] = cur_data_num
- cur_out['user_data'].update(load_dict['user_data'][cur_user_id])
- with open(cur_user_path, 'w') as f:
- json.dump(cur_out, f)
- f = os.listdir(new_root_path)
- print(len(f), ' users have been processed!')
- # partition train json files
- partition_json("leaf/data/femnist/35_client_sf1_data/train", "leaf/data/femnist/3500_client_json/train")
- # partition test json files
- partition_json("leaf/data/femnist/35_client_sf1_data/test", "leaf/data/femnist/3500_client_json/test")
- ```
-
-    where `root_path` is `leaf/data/femnist/35_client_sf1_data/{train,test}`, and `new_root_path` is chosen by the user for storing the generated 3500 user json files; the train and test folders need to be processed separately.
-
- Each of the 3500 newly generated user json files contains the following three parts:
-
- - `user_name`: User name.
- - `num_samples`: The number of user samples
- - `user_data`: A dictionary object with 'x' as key and user data as value; with 'y' as key and the label corresponding to the user data as value.
-
-    If the script runs successfully, it prints output like the following:
-
- ```sh
- ======== process 1 file: /leaf/data/femnist/35_client_sf1_data/train/all_data_16_niid_0_keep_0_train_9.json======================
- ---processing user: 1---
- ---processing user: 2---
- ---processing user: 3---
- ......
- ```
-
-6. Convert a json file to an image file.
-
- Refer to the following code:
-
- ```python
- import os
- import json
- import numpy as np
- from PIL import Image
-
- name_list = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
- 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U',
- 'V', 'W', 'X', 'Y', 'Z',
- 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u',
- 'v', 'w', 'x', 'y', 'z'
- ]
-
- def mkdir(path):
- if not os.path.exists(path):
- os.mkdir(path)
-
- def json_2_numpy(img_size, file_path):
- """
- read json file to numpy
- Args:
- img_size (list): contain three elements: the height, width, channel of image
- file_path (str): root path of 3500 json files
- return:
- image_numpy (numpy)
- label_numpy (numpy)
- """
- # open json file
- with open(file_path, 'r') as load_f_train:
- load_dict = json.load(load_f_train)
- num_samples = load_dict['num_samples']
- x = load_dict['user_data']['x']
- y = load_dict['user_data']['y']
- size = (num_samples, img_size[0], img_size[1], img_size[2])
- image_numpy = np.array(x, dtype=np.float32).reshape(size) # mindspore doesn't support float64 and int64
- label_numpy = np.array(y, dtype=np.int32)
- return image_numpy, label_numpy
-
- def json_2_img(json_path, save_path):
- """
- transform single json file to images
-
- Args:
- json_path (str): the path json file
- save_path (str): the root path to save images
-
- """
- data, label = json_2_numpy([28, 28, 1], json_path)
- for i in range(data.shape[0]):
-            img = data[i] * 255  # PIL doesn't support 0/1 images; convert to the 0~255 range
- im = Image.fromarray(np.squeeze(img))
- im = im.convert('L')
- img_name = str(label[i]) + '_' + name_list[label[i]] + '_' + str(i) + '.png'
- path1 = os.path.join(save_path, str(label[i]))
- mkdir(path1)
- img_path = os.path.join(path1, img_name)
- im.save(img_path)
- print('-----', i, '-----')
-
- def all_json_2_img(root_path, save_root_path):
- """
- transform json files to images
- Args:
-            root_path (str): the root path of the 3500 json files
-            save_root_path (str): the root path to save images
- """
- usage = ['train', 'test']
- for i in range(2):
- x = usage[i]
- files_path = os.path.join(root_path, x)
- files = os.listdir(files_path)
-
- for name in files:
- user_name = name.split('.')[0]
- json_path = os.path.join(files_path, name)
- save_path1 = os.path.join(save_root_path, user_name)
- mkdir(save_path1)
- save_path = os.path.join(save_path1, x)
- mkdir(save_path)
- print('=============================' + name + '=======================')
- json_2_img(json_path, save_path)
-
- all_json_2_img("leaf/data/femnist/3500_client_json/", "leaf/data/femnist/3500_client_img/")
- ```
-
-    If the script runs successfully, it prints output like the following:
-
- ```sh
- =============================f0644_19.json=======================
- ----- 0 -----
- ----- 1 -----
- ----- 2 -----
- ......
- ```
-
-7. Some user folders contain only a few samples; if the number of samples is smaller than the batch size, the data needs to be randomly expanded.
-
- The entire dataset `"leaf/data/femnist/3500_client_img/"` can be checked and expanded by referring to the following code:
-
- ```python
- import os
- import shutil
- from random import choice
-
- def count_dir(path):
- num = 0
- for root, dirs, files in os.walk(path):
- for file in files:
- num += 1
- return num
-
- def get_img_list(path):
- img_path_list = []
- label_list = os.listdir(path)
- for i in range(len(label_list)):
- label = label_list[i]
- imgs_path = os.path.join(path, label)
- imgs_name = os.listdir(imgs_path)
- for j in range(len(imgs_name)):
- img_name = imgs_name[j]
- img_path = os.path.join(imgs_path, img_name)
- img_path_list.append(img_path)
- return img_path_list
-
- def data_aug(data_root_path, batch_size = 32):
- users = os.listdir(data_root_path)
- tags = ["train", "test"]
- aug_users = []
- for i in range(len(users)):
- user = users[i]
- for tag in tags:
- data_path = os.path.join(data_root_path, user, tag)
- num_data = count_dir(data_path)
- if num_data < batch_size:
- aug_users.append(user + "_" + tag)
- print("user: ", user, " ", tag, " data number: ", num_data, " < ", batch_size, " should be aug")
- aug_num = batch_size - num_data
- img_path_list = get_img_list(data_path)
- for j in range(aug_num):
- img_path = choice(img_path_list)
- info = img_path.split(".")
- aug_img_path = info[0] + "_aug_" + str(j) + ".png"
- shutil.copy(img_path, aug_img_path)
- print("[aug", j, "]", "============= copy file:", img_path, "to ->", aug_img_path)
- print("the number of all aug users: " + str(len(aug_users)))
- print("aug user name: ", end=" ")
- for k in range(len(aug_users)):
- print(aug_users[k], end = " ")
-
- if __name__ == "__main__":
- data_root_path = "leaf/data/femnist/3500_client_img/"
- batch_size = 32
- data_aug(data_root_path, batch_size)
- ```
-
-8. Convert the expanded image dataset into a bin file format usable in the Federated Learning Framework.
-
- Refer to the following code:
-
- ```python
- import numpy as np
- import os
- import mindspore.dataset as ds
- import mindspore.dataset.vision as vision
- import mindspore.dataset.transforms as transforms
- import mindspore
-
- def mkdir(path):
- if not os.path.exists(path):
- os.mkdir(path)
-
- def count_id(path):
- files = os.listdir(path)
- ids = {}
- for i in files:
- ids[i] = int(i)
- return ids
-
- def create_dataset_from_folder(data_path, img_size, batch_size=32, repeat_size=1, num_parallel_workers=1, shuffle=False):
- """ create dataset for train or test
- Args:
- data_path: Data path
- batch_size: The number of data records in each group
- repeat_size: The number of replicated data records
- num_parallel_workers: The number of parallel workers
- """
- # define dataset
- ids = count_id(data_path)
- mnist_ds = ds.ImageFolderDataset(dataset_dir=data_path, decode=False, class_indexing=ids)
- # define operation parameters
- resize_height, resize_width = img_size[0], img_size[1] # 32
-
- transform = [
- vision.Decode(True),
- vision.Grayscale(1),
- vision.Resize(size=(resize_height, resize_width)),
- vision.Grayscale(3),
- vision.ToTensor(),
- ]
- compose = transforms.Compose(transform)
-
- # apply map operations on images
- mnist_ds = mnist_ds.map(input_columns="label", operations=transforms.TypeCast(mindspore.int32))
- mnist_ds = mnist_ds.map(input_columns="image", operations=compose)
-
- # apply DatasetOps
- buffer_size = 10000
- if shuffle:
- mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script
- mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
- mnist_ds = mnist_ds.repeat(repeat_size)
- return mnist_ds
-
- def img2bin(root_path, root_save):
- """
- transform images to bin files
-
- Args:
- root_path: the root path of 3500 images files
- root_save: the root path to save bin files
-
- """
-
- use_list = []
- train_batch_num = []
- test_batch_num = []
- mkdir(root_save)
- users = os.listdir(root_path)
- for user in users:
- use_list.append(user)
- user_path = os.path.join(root_path, user)
- train_test = os.listdir(user_path)
- for tag in train_test:
- data_path = os.path.join(user_path, tag)
- dataset = create_dataset_from_folder(data_path, (32, 32, 1), 32)
- batch_num = 0
- img_list = []
- label_list = []
- for data in dataset.create_dict_iterator():
- batch_x_tensor = data['image']
- batch_y_tensor = data['label']
- trans_img = np.transpose(batch_x_tensor.asnumpy(), [0, 2, 3, 1])
- img_list.append(trans_img)
- label_list.append(batch_y_tensor.asnumpy())
- batch_num += 1
-
- if tag == "train":
- train_batch_num.append(batch_num)
- elif tag == "test":
- test_batch_num.append(batch_num)
-
-                imgs = np.array(img_list)  # shape: (batch_num, 32, 32, 32, 3) after the NHWC transpose
- labels = np.array(label_list)
- path1 = os.path.join(root_save, user)
- mkdir(path1)
- image_path = os.path.join(path1, user + "_" + "bn_" + str(batch_num) + "_" + tag + "_data.bin")
- label_path = os.path.join(path1, user + "_" + "bn_" + str(batch_num) + "_" + tag + "_label.bin")
-
- imgs.tofile(image_path)
- labels.tofile(label_path)
- print("user: " + user + " " + tag + "_batch_num: " + str(batch_num))
- print("total " + str(len(use_list)) + " users finished!")
-
- root_path = "leaf/data/femnist/3500_client_img/"
- root_save = "leaf/data/femnist/3500_clients_bin"
- img2bin(root_path, root_save)
- ```
-
-    If the script runs successfully, it prints output like the following:
-
- ```sh
- user: f0141_43 test_batch_num: 1
- user: f0141_43 train_batch_num: 10
- user: f0137_14 test_batch_num: 1
- user: f0137_14 train_batch_num: 11
- ......
- total 3500 users finished!
- ```
-
-9. The generated `3500_clients_bin` folder contains 3500 user folders in total, with the following directory structure:
-
- ```sh
- leaf/data/femnist/3500_clients_bin
- ├── f0000_14 # User number
- │ ├── f0000_14_bn_10_train_data.bin # The training data of user f0000_14 (The number 10 after bn_ represents the batch number)
- │ ├── f0000_14_bn_10_train_label.bin # Training tag for user f0000_14
- │ ├── f0000_14_bn_1_test_data.bin # Test data of user f0000_14 (the number 1 after bn_ represents batch number)
- │ └── f0000_14_bn_1_test_label.bin # Test tag for user f0000_14
- ├── f0001_41 # User number
- │ ├── f0001_41_bn_11_train_data.bin # The training data of user f0001_41 (The number 11 after bn_ represents the batch number)
- │ ├── f0001_41_bn_11_train_label.bin # Training tag for user f0001_41
- │ ├── f0001_41_bn_1_test_data.bin # Test data of user f0001_41 (the number 1 after bn_ represents batch number)
- │ └── f0001_41_bn_1_test_label.bin # Test tag for user f0001_41
- │ ...
- └── f4099_10 # User number
- ├── f4099_10_bn_4_train_data.bin # The training data of user f4099_10 (the number 4 after bn_ represents the batch number)
- ├── f4099_10_bn_4_train_label.bin # Training tag for user f4099_10
- ├── f4099_10_bn_1_test_data.bin # Test data of user f4099_10 (the number 1 after bn_ represents batch number)
- └── f4099_10_bn_1_test_label.bin # Test tag for user f4099_10
- ```
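The bin files produced in step 8 can be read back with `numpy.fromfile`, since the conversion script writes float32 NHWC image batches and int32 labels, with the batch number encoded in the file name. A hedged sketch of a loader (the helper name and demo file names are illustrative; shapes follow the conversion script above):

```python
import numpy as np

def load_user_bin(data_path, label_path, batch_num, batch_size=32, img_shape=(32, 32, 3)):
    """Load one user's bin files back into numpy arrays.

    The conversion script stores float32 images of shape
    (batch_num, batch_size, H, W, C) and int32 labels of shape
    (batch_num, batch_size); raw .bin files carry no shape metadata,
    so those shapes must be supplied here.
    """
    images = np.fromfile(data_path, dtype=np.float32).reshape((batch_num, batch_size) + img_shape)
    labels = np.fromfile(label_path, dtype=np.int32).reshape((batch_num, batch_size))
    return images, labels

# Round-trip demo with tiny random data (batch_num=2).
imgs = np.random.rand(2, 32, 32, 32, 3).astype(np.float32)
labels = np.random.randint(0, 62, size=(2, 32)).astype(np.int32)
imgs.tofile("demo_data.bin")
labels.tofile("demo_label.bin")
loaded_imgs, loaded_labels = load_user_bin("demo_data.bin", "demo_label.bin", batch_num=2)
print(loaded_imgs.shape, loaded_labels.shape)  # (2, 32, 32, 32, 3) (2, 32)
```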
-
-The `3500_clients_bin` folder generated according to steps 1 to 9 above can be directly used as the input data for the device-cloud federated image classification task.
diff --git a/docs/federated/docs/source_en/image_classification_application.md b/docs/federated/docs/source_en/image_classification_application.md
deleted file mode 100644
index 04a7b00dc093f330a7b333f175cfe99a94255446..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/image_classification_application.md
+++ /dev/null
@@ -1,331 +0,0 @@
-# Implementing an Image Classification Application of Cross-device Federated Learning (x86)
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/image_classification_application.md)
-
-Federated learning can be divided into cross-silo federated learning and cross-device federated learning according to the participating clients. In the cross-silo scenario, the clients are different organizations (for example, medical or financial) or geographically distributed data centers, that is, the model is trained across multiple data islands. In the cross-device scenario, the clients are a large number of mobile or IoT devices. This tutorial introduces how to use the LeNet network to implement an image classification application on the MindSpore cross-device federated framework, and provides a tutorial for simulating multiple clients participating in federated learning in an x86 environment.
-
-Before you start, check whether MindSpore has been correctly installed. If not, install MindSpore on your computer by referring to [Install](https://www.mindspore.cn/install/en) on the MindSpore website.
-
-## Preparation
-
-We provide [Federated Learning Image Classification Dataset FEMNIST](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/federated/3500_clients_bin.zip) and the [device-side model file](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/models/lenet_train.ms) of the `.ms` format for users to use directly. Users can also refer to the following tutorials to generate the datasets and models based on actual needs.
-
-### Generating a Device-side Model File
-
-1. Define the network and training process.
-
- For the definition of the specific network and training process, please refer to [Beginners Getting Started](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html).
-
-2. Export a model as a MindIR file.
-
- The code snippet is as follows:
-
- ```python
- import argparse
- import numpy as np
- import mindspore as ms
- import mindspore.nn as nn
-
- def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):
- """weight initial for conv layer"""
- weight = weight_variable()
- return nn.Conv2d(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding,
- weight_init=weight,
- has_bias=False,
- pad_mode="valid",
- )
-
- def fc_with_initialize(input_channels, out_channels):
- """weight initial for fc layer"""
- weight = weight_variable()
- bias = weight_variable()
- return nn.Dense(input_channels, out_channels, weight, bias)
-
- def weight_variable():
- """weight initial"""
- return ms.common.initializer.TruncatedNormal(0.02)
-
- class LeNet5(nn.Cell):
- def __init__(self, num_class=10, channel=3):
- super(LeNet5, self).__init__()
- self.num_class = num_class
- self.conv1 = conv(channel, 6, 5)
- self.conv2 = conv(6, 16, 5)
- self.fc1 = fc_with_initialize(16 * 5 * 5, 120)
- self.fc2 = fc_with_initialize(120, 84)
- self.fc3 = fc_with_initialize(84, self.num_class)
- self.relu = nn.ReLU()
- self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
- self.flatten = nn.Flatten()
-
- def construct(self, x):
- x = self.conv1(x)
- x = self.relu(x)
- x = self.max_pool2d(x)
- x = self.conv2(x)
- x = self.relu(x)
- x = self.max_pool2d(x)
- x = self.flatten(x)
- x = self.fc1(x)
- x = self.relu(x)
- x = self.fc2(x)
- x = self.relu(x)
- x = self.fc3(x)
- return x
-
- parser = argparse.ArgumentParser(description="export mindir for lenet")
- parser.add_argument("--device_target", type=str, default="CPU")
- parser.add_argument("--mindir_path", type=str,
-                        default="lenet_train.mindir")  # the mindir file path of the model to be exported
-
- args, _ = parser.parse_known_args()
- device_target = args.device_target
- mindir_path = args.mindir_path
-
- ms.set_context(mode=ms.GRAPH_MODE, device_target=device_target)
-
- if __name__ == "__main__":
- np.random.seed(0)
- network = LeNet5(62)
- criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=False, reduction="mean")
- net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
- net_with_criterion = nn.WithLossCell(network, criterion)
- train_network = nn.TrainOneStepCell(net_with_criterion, net_opt)
- train_network.set_train()
-
- data = ms.Tensor(np.random.rand(32, 3, 32, 32).astype(np.float32))
- label = ms.Tensor(np.random.randint(0, 1, (32, 62)).astype(np.float32))
- ms.export(train_network, data, label, file_name=mindir_path,
- file_format='MINDIR') # Add the export statement to obtain the model file in MindIR format.
- ```
-
- The parameter `--mindir_path` is used to set the path of the generated file in MindIR format.
-
-3. Convert the MindIR file into an .ms file that can be used by the federated learning device-side framework.
-
- For details about model conversion, see [Training Model Conversion Tutorial](https://www.mindspore.cn/lite/docs/en/master/train/converter_train.html).
-
- The following is an example of model conversion:
-
- Assume that the model file to be converted is `lenet_train.mindir`. Run the following command:
-
- ```sh
- ./converter_lite --fmk=MINDIR --trainModel=true --modelFile=lenet_train.mindir --outputFile=lenet_train
- ```
-
- If the conversion is successful, the following information is displayed:
-
- ```sh
- CONVERT RESULT SUCCESS:0
- ```
-
- This indicates that the MindSpore model is successfully converted to the MindSpore device-side model and the new file `lenet_train.ms` is generated. If the conversion fails, the following information is displayed:
-
- ```sh
- CONVERT RESULT FAILED:
- ```
-
- The generated model file in `.ms` format is the model file required by subsequent clients.
-
-## Simulating Multi-client Participation in Federated Learning
-
-### Preparing a Model File for the Client
-
-This example simulates the actual network used with LeNet on the device side, namely the [device-side model file](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/models/lenet_train.ms) of LeNet in `.ms` format. In a real scenario, a client holds only one model file in `.ms` format. In the simulation scenario, multiple copies of the `.ms` file are required, named in the `lenet_train{i}.ms` format, where `i` represents the client number. The `run_client_x86.py` script automatically copies the `.ms` file for each client.
-
-See the copy_ms function in [startup script](https://gitee.com/mindspore/federated/blob/master/example/cross_device_lenet_femnist/simulate_x86/run_client_x86.py) for details.
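The per-client copying described above can be sketched in a few lines of Python. This is a minimal illustration of the `lenet_train{i}.ms` naming convention, not the actual `copy_ms` implementation, and `copy_ms_files` is a hypothetical helper name:

```python
import os
import shutil

def copy_ms_files(src_ms_path, target_dir, client_num):
    """Copy one .ms model file into per-client copies named lenet_train{i}.ms."""
    os.makedirs(target_dir, exist_ok=True)
    copied = []
    for i in range(client_num):
        dst = os.path.join(target_dir, "lenet_train{}.ms".format(i))
        shutil.copy(src_ms_path, dst)
        copied.append(dst)
    return copied
```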
-
-### Starting the Cloud Side Service
-
-Users can first refer to [cloud-side deployment tutorial](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_server.html) to deploy the cloud-side environment and start the cloud-side service.
-
-### Starting the Client
-
-Before starting the client, please refer to the [device-side deployment tutorial](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_client.html) to deploy the device environment.
-
-We provide a reference script [run_client_x86.py](https://gitee.com/mindspore/federated/blob/master/example/cross_device_lenet_femnist/simulate_x86/run_client_x86.py). Users can set the relevant parameters to start different federated learning interfaces.
-After the cloud-side service is started successfully, the `run_client_x86.py` script calls the federated learning framework JAR package `mindspore-lite-java-flclient.jar` and the JAR package `quick_start_flclient.jar` corresponding to the model script (both obtained through the [compilation package process in device-side deployment](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_client.html)) to simulate starting multiple clients that participate in the federated learning task.
-
-Taking the LeNet network as an example, some of the input parameters in the `run_client_x86.py` script have the following meanings, and users can set them according to the actual situation:
-
-- `--fl_jar_path`
-
-    Specifies the path of the federated learning JAR package. For obtaining the federated learning JAR package in the x86 environment, refer to the [compilation package process in device-side deployment](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_client.html).
-
-- `--case_jar_path`
-
-    Specifies the path of the JAR package `quick_start_flclient.jar` generated from the model script. For obtaining the JAR package in the x86 environment, see the [compilation package process in device-side deployment](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_client.html).
-
-- `--lite_jar_path`
-
-    Specifies the path of the MindSpore Lite JAR package `mindspore-lite-java.jar`, which is located in `mindspore-lite-{version}-linux-x64.tar.gz`. For obtaining the federated learning JAR package in the x86 environment, see the [compilation package process in device-side deployment](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_client.html).
-
-- `--train_data_dir`
-
-    Specifies the root path of the training dataset. For the LeNet image classification task, this directory stores the training data.bin file and label.bin file of each client, for example, `data/femnist/3500_clients_bin/`.
-
-- `--fl_name`
-
-    Specifies the package path of the model script used by federated learning. We provide two types of model scripts for reference ([supervised sentiment classification task](https://gitee.com/mindspore/federated/tree/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/albert), [LeNet image classification task](https://gitee.com/mindspore/federated/tree/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/lenet)). For supervised sentiment classification tasks, this parameter can be set to the package path of the provided script file [AlBertClient.java](https://gitee.com/mindspore/federated/blob/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/albert/AlbertClient.java), such as `com.mindspore.flclient.demo.albert.AlbertClient`. For LeNet image classification tasks, this parameter can be set to the package path of the provided script file [LenetClient.java](https://gitee.com/mindspore/federated/blob/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/lenet/LenetClient.java), such as `com.mindspore.flclient.demo.lenet.LenetClient`. Users can also refer to these two model scripts to define a model script by themselves, and then set this parameter to the package path of the customized model file ModelClient.java (which needs to inherit from the class [Client.java](https://gitee.com/mindspore/federated/blob/master/mindspore_federated/device_client/src/main/java/com/mindspore/flclient/model/Client.java)).
-
-- `--train_model_dir`
-
- Specifies the training model path used for federated learning. The path is the directory where multiple .ms files copied in the preceding tutorial are stored, for example, `ms/lenet`. The path must be an absolute path.
-
-- `--domain_name`
-
-    Specifies the URL for device-cloud communication. Currently, https and http communication are supported, with the corresponding formats https://...... and http://....... When `if_use_elb` is set to true, the format must be `https://127.0.0.1:6666` or `http://127.0.0.1:6666`, where `127.0.0.1` corresponds to the IP address of the machine providing the cloud-side service (corresponding to the cloud-side parameter `--scheduler_ip`), and `6666` corresponds to the cloud-side parameter `--fl_server_port`.
-
- Note 1: When this parameter is set to `http://......`, it means that HTTP communication is used, and there may be communication security risks.
-
-    Note 2: When this parameter is set to `https://......`, HTTPS communication is used. In this case, SSL certificate authentication must be performed, and the certificate path needs to be set through the parameter `--cert_path`.
-
-- `--task`
-
- Specifies the type of the task to be started. `train` indicates that a training task is started. `inference` indicates that multiple data inference tasks are started. `getModel` indicates that the task for obtaining the cloud model is started. Other character strings indicate that the inference task of a single data record is started. The default value is `train`. The initial model file (.ms file) is not trained. Therefore, you are advised to start the training task first. After the training is complete, start the inference task. (Note that the values of client_num in the two startups must be the same to ensure that the model file used by `inference` is the same as that used by `train`.)
-
-- `--batch_size`
-
- Specifies the number of single-step training samples used in federated learning training and inference, that is, batch size. It needs to be consistent with the batch size of the input data of the model.
-
-- `--client_num`
-
- Specifies the number of clients. The value must be the same as that of `start_fl_job_cnt` when the server is started. This parameter is not required in actual scenarios.
-
-If you want to know more about the meaning of other parameters in the `run_client_x86.py` script, you can refer to the comments in the script.
-
-The basic startup instructions of the federated learning interface are as follows:
-
-```sh
-    rm -rf client_* \
- && rm -rf ms/* \
- && python3 run_client_x86.py \
- --fl_jar_path="federated/mindspore_federated/device_client/build/libs/jarX86/mindspore-lite-java-flclient.jar" \
- --case_jar_path="federated/example/quick_start_flclient/target/case_jar/quick_start_flclient.jar" \
- --lite_jar_path="federated/mindspore_federated/device_client/third/mindspore-lite-2.0.0-linux-x64/runtime/lib/mindspore-lite-java.jar" \
- --train_data_dir="federated/tests/st/simulate_x86/data/3500_clients_bin/" \
- --eval_data_dir="null" \
- --infer_data_dir="null" \
- --vocab_path="null" \
- --ids_path="null" \
- --path_regex="," \
- --fl_name="com.mindspore.flclient.demo.lenet.LenetClient" \
- --origin_train_model_path="federated/tests/st/simulate_x86/ms_files/lenet/lenet_train.ms" \
- --origin_infer_model_path="null" \
- --train_model_dir="ms" \
- --infer_model_dir="ms" \
- --ssl_protocol="TLSv1.2" \
- --deploy_env="x86" \
- --domain_name="http://10.*.*.*:8010" \
- --cert_path="CARoot.pem" --use_elb="false" \
- --server_num=1 \
- --task="train" \
- --thread_num=1 \
- --cpu_bind_mode="NOT_BINDING_CORE" \
- --train_weight_name="null" \
- --infer_weight_name="null" \
- --name_regex="::" \
- --server_mode="FEDERATED_LEARNING" \
- --batch_size=32 \
- --input_shape="null" \
- --client_num=8
-```
-
-Note that the related paths in the startup command must be absolute paths.
-
-The above commands indicate that eight clients are started to participate in federated learning. If the startup is successful, log files corresponding to the eight clients are generated in the current folder. You can view the log files to learn the running status of each client:
-
-```text
-./
-├── client_0
-│ └── client.log # Log file of client 0.
-│ ......
-└── client_7
- └── client.log # Log file of client 7.
-```
-
-For different interfaces and scenarios, you only need to modify specific parameter values according to the meaning of the parameters, such as:
-
-- Start federated learning and training tasks: SyncFLJob.flJobRun()
-
- When `--task` in `Basic Start Command` is set to `train`, it means to start the task.
-
- You can use the command `grep -r "average loss:" client_0/client.log` to view the average loss of each epoch of `client_0` during the training process. It will be printed as follows:
-
- ```sh
- INFO: ----------epoch:0,average loss:4.1258564 ----------
- ......
- ```
-
- You can also use the command `grep -r "evaluate acc:" client_0/client.log` to view the verification accuracy of the model after the aggregation in each federated learning iteration for `client_0` . It will be printed like the following:
-
- ```sh
- INFO: [evaluate] evaluate acc: 0.125
- ......
- ```
-
-    On the cloud side, the number of client group IDs and the algorithm type for unsupervised clustering metrics can be specified through the `cluster_client_num` and `eval_type` parameters of the yaml configuration file. The unsupervised metric information can be queried in the `metrics.json` statistics file generated on the cloud side:
-
- ```text
- "unsupervisedEval":0.640
- "unsupervisedEval":0.675
- "unsupervisedEval":0.677
- "unsupervisedEval":0.706
- ......
- ```
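The `average loss` lines shown above can also be parsed programmatically, for example to plot a training curve. A minimal sketch, assuming the log line format from the sample output (`parse_avg_loss` is a hypothetical helper, not part of the framework):

```python
import re

def parse_avg_loss(log_lines):
    """Extract (epoch, average_loss) pairs from client.log lines."""
    pattern = re.compile(r"epoch:(\d+),average loss:([\d.]+)")
    results = []
    for line in log_lines:
        match = pattern.search(line)
        if match:
            results.append((int(match.group(1)), float(match.group(2))))
    return results
```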
-
-- Start the inference task: SyncFLJob.modelInference()
-
- When `--task` in `Basic Start Command` is set to `inference`, it means to start the task.
-
- You can view the inference result of `client_0` through the command `grep -r "the predicted labels:" client_0/client.log`:
-
- ```sh
- INFO: [model inference] the predicted labels: [0, 0, 0, 1, 1, 1, 2, 2, 2]
- ......
- ```
-
-- Start the task of obtaining the latest model on the cloud side: SyncFLJob.getModel()
-
-    When `--task` in `Basic Start Command` is set to `getModel`, it means to start the task.
-
- If there is the following content in the log file, it means that the latest model on the cloud side is successfully obtained:
-
- ```sh
- INFO: [getModel] get response from server ok!
- ```
-
-### Stopping the Client Process
-
-For details, see the [finish.py](https://gitee.com/mindspore/federated/blob/master/example/cross_device_lenet_femnist/simulate_x86/finish.py) script.
-
-The command for stopping the client process is as follows:
-
-```sh
-python finish.py --kill_tag=mindspore-lite-java-flclient
-```
-
-The parameter `--kill_tag` is used as the keyword for searching for the client processes to kill. You only need to set it to a unique keyword contained in the JAR package path. The default value is `mindspore-lite-java-flclient`, that is, the name of the federated learning JAR package. Users can check whether the processes still exist by running `ps -ef |grep "mindspore-lite-java-flclient"`.
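The keyword matching performed when stopping clients can be sketched as follows. This is a simplified illustration that filters `ps -ef`-style output for the `--kill_tag` keyword rather than a reproduction of the actual `finish.py` script; `find_pids_by_tag` is a hypothetical helper:

```python
import subprocess

def find_pids_by_tag(kill_tag, ps_output=None):
    """Return PIDs of processes whose command line contains kill_tag."""
    if ps_output is None:
        ps_output = subprocess.check_output(["ps", "-ef"]).decode()
    pids = []
    for line in ps_output.splitlines()[1:]:  # skip the header row
        fields = line.split(None, 7)  # UID PID PPID C STIME TTY TIME CMD
        if len(fields) == 8 and kill_tag in fields[7]:
            pids.append(int(fields[1]))
    return pids
```

Each returned PID could then be terminated with, for example, `os.kill(pid, signal.SIGKILL)`.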
-
-### Experimental Results of 50 Clients Participating in Federated Learning Training Tasks
-
-Currently, the `3500_clients_bin` folder contains data of 3500 clients. This script can simulate a maximum of 3500 clients to participate in federated learning.
-
-The following figure shows the accuracy of the test dataset for federated learning on 50 clients (set `server_num` to 16).
-
-
-
-The total number of federated learning iterations is 100, the number of epochs for local training on the client is 20, and the value of batchSize is 32.
-
-The test accuracy in the figure refers to the accuracy of each client test dataset on the aggregated model on the cloud for each federated learning iteration:
-
-AVG: average accuracy of 50 clients in the test dataset for each federated learning iteration.
-
-TOP5: average accuracy of the 5 clients with the highest accuracy in the test dataset for each federated learning iteration.
-
-LOW5: average accuracy of the 5 clients with the lowest accuracy in the test dataset for each federated learning iteration.
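Given the per-client accuracies of one federated learning iteration, the three statistics above can be computed as follows (a small sketch with illustrative values; `accuracy_stats` is a hypothetical helper):

```python
def accuracy_stats(accs, k=5):
    """Return (AVG, TOP-k mean, LOW-k mean) over per-client test accuracies."""
    ordered = sorted(accs, reverse=True)
    avg = sum(accs) / len(accs)
    top_k = sum(ordered[:k]) / k
    low_k = sum(ordered[-k:]) / k
    return avg, top_k, low_k
```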
diff --git a/docs/federated/docs/source_en/image_classification_application_in_cross_silo.md b/docs/federated/docs/source_en/image_classification_application_in_cross_silo.md
deleted file mode 100644
index 86196f4866a41b9cd97e68edc0ecdd4089b50df2..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/image_classification_application_in_cross_silo.md
+++ /dev/null
@@ -1,313 +0,0 @@
-# Implementing a Cross-Silo Federated Image Classification Application (x86)
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/image_classification_application_in_cross_silo.md)
-
-Based on the type of participating clients, federated learning can be classified into cross-silo federated learning and cross-device federated learning. In a cross-silo federated learning scenario, the clients involved in federated learning are different organizations (e.g., healthcare or finance) or geographically distributed data centers, i.e., models are trained on multiple data silos. In the cross-device federated learning scenario, the participating clients are a large number of mobile or IoT devices. This tutorial describes how to implement an image classification application by using the LeNet network on the MindSpore Federated cross-silo federated framework.
-
-The full script to launch cross-silo federated image classification application can be found [here](https://gitee.com/mindspore/federated/tree/master/example/cross_silo_femnist).
-
-## Downloading the Dataset
-
-This example uses the federated learning dataset `FEMNIST` from [leaf dataset](https://github.com/TalwalkarLab/leaf), which contains 62 different categories of handwritten numbers and letters (numbers 0 to 9, 26 lowercase letters, 26 uppercase letters) with an image size of `28 x 28` pixels. The dataset contains handwritten digits and letters from 3500 users (up to 3500 clients can be simulated to participate in federated learning). The total data volume is 805263, the average amount of data contained per user is 226.83, and the variance of the data volume for all users is 88.94.
-
-You can refer to steps 1 to 7 in [Image classification dataset process](https://www.mindspore.cn/federated/docs/en/master/image_classfication_dataset_process.html) to obtain the 3500 user datasets `3500_clients_img` in the form of images.
-
-Because each user in the original 3500-user dataset has a relatively small amount of data, the cross-silo federated task converges too quickly to clearly show the convergence behavior of the cross-silo federated framework. The following reference script merges a specified number of user datasets into one user, to increase the amount of data per user participating in the cross-silo federated task and better simulate the cross-silo federated framework experiment.
-
-```python
-import os
-import shutil
-
-
-def mkdir(path):
- if not os.path.exists(path):
- os.mkdir(path)
-
-
-def combine_users(root_data_path, new_data_path, raw_user_num, new_user_num):
- mkdir(new_data_path)
- user_list = os.listdir(root_data_path)
- num_per_user = int(raw_user_num / new_user_num)
- for i in range(new_user_num):
- print(
- "========================== combine the raw {}~{} users to the new user: dataset_{} ==========================".format(
- i * num_per_user, i * num_per_user + num_per_user - 1, i))
- new_user = "dataset_" + str(i)
- new_user_path = os.path.join(new_data_path, new_user)
- mkdir(new_user_path)
- for j in range(num_per_user):
-            index = i * num_per_user + j  # step by num_per_user: raw users 0~4 go to dataset_0, 5~9 to dataset_1, ...
- user = user_list[index]
- user_path = os.path.join(root_data_path, user)
- tags = os.listdir(user_path)
- print("------------- process the raw user: {} -------------".format(user))
- for t in tags:
- tag_path = os.path.join(user_path, t)
- label_list = os.listdir(tag_path)
- new_tag_path = os.path.join(new_user_path, t)
- mkdir(new_tag_path)
- for label in label_list:
- label_path = os.path.join(tag_path, label)
- img_list = os.listdir(label_path)
- new_label_path = os.path.join(new_tag_path, label)
- mkdir(new_label_path)
-
- for img in img_list:
- img_path = os.path.join(label_path, img)
- new_img_name = user + "_" + img
- new_img_path = os.path.join(new_label_path, new_img_name)
- shutil.copy(img_path, new_img_path)
-
-if __name__ == "__main__":
- root_data_path = "cross_silo_femnist/femnist/3500_clients_img"
- new_data_path = "cross_silo_femnist/femnist/35_7_client_img"
- raw_user_num = 35
- new_user_num = 7
- combine_users(root_data_path, new_data_path, raw_user_num, new_user_num)
-```
-
-where `root_data_path` is the path to the original 3500-user dataset, `new_data_path` is the path to the merged dataset, `raw_user_num` specifies the total number of user datasets to be merged (must be <= 3500), and `new_user_num` sets the number of users after merging. For example, the sample code selects the first 35 users from `cross_silo_femnist/femnist/3500_clients_img`, merges them into 7 user datasets, and stores them in the path `cross_silo_femnist/femnist/35_7_client_img` (each of the 7 merged users contains 5 original users' data).
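The merge follows the mapping `index = i * num_per_user + j`, so new user `i` receives `num_per_user` consecutive raw users. A short sketch of this mapping (illustrative only; `raw_user_indices` is a hypothetical helper):

```python
def raw_user_indices(new_user_id, raw_user_num, new_user_num):
    """Return the raw-user indices merged into the given new user."""
    num_per_user = raw_user_num // new_user_num
    start = new_user_id * num_per_user
    return list(range(start, start + num_per_user))
```

For `raw_user_num=35` and `new_user_num=7`, new user 0 receives raw users 0 to 4 and new user 1 receives raw users 5 to 9.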
-
-The following print represents a successful merge of the datasets.
-
-```sh
-========================== combine the raw 0~4 users to the new user: dataset_0 ==========================
-------------- process the raw user: f1798_42 -------------
-------------- process the raw user: f2149_81 -------------
-------------- process the raw user: f4046_46 -------------
-------------- process the raw user: f1093_13 -------------
-------------- process the raw user: f1124_24 -------------
-========================== combine the raw 5~9 users to the new user: dataset_1 ==========================
-------------- process the raw user: f0586_11 -------------
-------------- process the raw user: f0721_31 -------------
-------------- process the raw user: f3527_33 -------------
-------------- process the raw user: f0146_33 -------------
-------------- process the raw user: f1272_09 -------------
-========================== combine the raw 10~14 users to the new user: dataset_2 ==========================
-------------- process the raw user: f0245_40 -------------
-------------- process the raw user: f2363_77 -------------
-------------- process the raw user: f3596_19 -------------
-------------- process the raw user: f2418_82 -------------
-------------- process the raw user: f2288_58 -------------
-========================== combine the raw 15~19 users to the new user: dataset_3 ==========================
-------------- process the raw user: f2249_75 -------------
-------------- process the raw user: f3681_31 -------------
-------------- process the raw user: f3766_48 -------------
-------------- process the raw user: f0537_35 -------------
-------------- process the raw user: f0614_14 -------------
-========================== combine the raw 20~24 users to the new user: dataset_4 ==========================
-------------- process the raw user: f2302_58 -------------
-------------- process the raw user: f3472_19 -------------
-------------- process the raw user: f3327_11 -------------
-------------- process the raw user: f1892_07 -------------
-------------- process the raw user: f3184_11 -------------
-========================== combine the raw 25~29 users to the new user: dataset_5 ==========================
-------------- process the raw user: f1692_18 -------------
-------------- process the raw user: f1473_30 -------------
-------------- process the raw user: f0909_04 -------------
-------------- process the raw user: f1956_19 -------------
-------------- process the raw user: f1234_26 -------------
-========================== combine the raw 30~34 users to the new user: dataset_6 ==========================
-------------- process the raw user: f0031_02 -------------
-------------- process the raw user: f0300_24 -------------
-------------- process the raw user: f4064_46 -------------
-------------- process the raw user: f2439_77 -------------
-------------- process the raw user: f1717_16 -------------
-```
-
-The directory structure of the folder `cross_silo_femnist/femnist/35_7_client_img` is as follows:
-
-```text
-35_7_client_img # Merge 35 users in the FEMNIST dataset into 7 clients' data (each containing 5 users' data)
-├── dataset_0 # The dataset of Client 0
-│ ├── train # Training dataset
-│ │ ├── 0 # Store image data corresponding to category 0
-│ │ ├── 1 # Store image data corresponding to category 1
-│ │ │ ......
-│ │ └── 61 # Store image data corresponding to category 61
-│ └── test # Test dataset, with the same directory structure as train
-│ ......
-│
-└── dataset_6 # The dataset of Client 6
- ├── train # Training dataset
- │ ├── 0 # Store image data corresponding to category 0
- │ ├── 1 # Store image data corresponding to category 1
- │ │ ......
- │ └── 61 # Store image data corresponding to category 61
- └── test # Test dataset, with the same directory structure as train
-```
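To sanity-check a merged client directory, the number of images per split can be counted with a small sketch (illustrative only, assuming the train/test layout shown above; `count_images` is a hypothetical helper):

```python
import os

def count_images(client_dir):
    """Count the image files under each split (train/test) of one client."""
    counts = {}
    for split in sorted(os.listdir(client_dir)):
        split_path = os.path.join(client_dir, split)
        total = 0
        for _root, _dirs, files in os.walk(split_path):
            total += len(files)
        counts[split] = total
    return counts
```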
-
-## Defining the Network
-
-We choose the relatively simple LeNet network, which has seven layers excluding the input layer: two convolutional layers, two downsampling layers (pooling layers), and three fully connected layers. Each layer contains a different number of training parameters, as shown in the following figure:
-
-
-
-> More information about LeNet network is not described herein. For more details, please refer to .
-
-The network used for this task can be found in the script [test_cross_silo_femnist.py](https://gitee.com/mindspore/federated/blob/master/example/cross_silo_femnist/test_cross_silo_femnist.py).
-
-For a specific understanding of the network definition process in MindSpore, please refer to [quick start](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html#building-network).
-
-## Launching the Cross-Silo Federated Task
-
-### Installing MindSpore and MindSpore Federated
-
-Both source code installation and downloadable distributions are supported on the CPU, GPU, and Ascend hardware platforms; choose the installation method according to your hardware platform. For the installation steps, see the [MindSpore Installation Guide](https://www.mindspore.cn/install) and the [MindSpore Federated Installation Guide](https://www.mindspore.cn/federated/docs/en/master/federated_install.html).
-
-Currently, the federated learning framework can be deployed only in Linux environments. The cross-silo federated learning framework requires MindSpore 1.5.0 or later.
-
-### Launching the Task
-
-Refer to the [example](https://gitee.com/mindspore/federated/tree/master/example/cross_silo_femnist) to launch the cluster. The reference example directory structure is as follows:
-
-```text
-cross_silo_femnist/
-├── config.json # Configuration file
-├── finish_cross_silo_femnist.py # Close the cross-silo federated task script
-├── run_cross_silo_femnist_sched.py # Start cross-silo federated scheduler script
-├── run_cross_silo_femnist_server.py # Start cross-silo federated server script
-├── run_cross_silo_femnist_worker.py # Start cross-silo federated worker script
-├── run_cross_silo_femnist_worker_distributed.py # Start the cross-silo federated distributed training worker script
-└── test_cross_silo_femnist.py # Training scripts used by the client
-```
-
-1. Start Scheduler
-
-    `run_cross_silo_femnist_sched.py` is a Python script provided for users to start the `Scheduler`, and it supports modifying the configuration through `argparse` arguments. The following command starts the `Scheduler` of this federated learning task with TCP port `5554`.
-
- ```sh
- python run_cross_silo_femnist_sched.py --scheduler_manage_address=127.0.0.1:5554
- ```
-
- The following print represents a successful start-up:
-
- ```sh
- [INFO] FEDERATED(35566,7f4275895740,python):2022-10-09-15:23:22.450.205 [mindspore_federated/fl_arch/ccsrc/scheduler/scheduler.cc:35] Run] Scheduler started successfully.
- [INFO] FEDERATED(35566,7f41f259d700,python):2022-10-09-15:23:22.450.357 [mindspore_federated/fl_arch/ccsrc/common/communicator/http_request_handler.cc:90] Run] Start http server!
- ```
-
-2. Start Server
-
-    `run_cross_silo_femnist_server.py` is a Python script provided for users to start a number of `Server` processes, and it supports modifying the configuration through `argparse` arguments. The following command starts the `Server` of this federated learning task, with HTTP start port `5555` and `4` servers.
-
- ```sh
- python run_cross_silo_femnist_server.py --local_server_num=4 --http_server_address=10.*.*.*:5555
- ```
-
-    The above command is equivalent to starting four `Server` processes, whose federated learning service ports are `5555`, `5556`, `5557`, and `5558` respectively.
-
-3. Start Worker
-
-    `run_cross_silo_femnist_worker.py` is a Python script provided for users to start a number of `worker` processes, and it supports modifying the configuration through `argparse` arguments. The following command starts the `worker` of this federated learning task, with HTTP start port `5555` and `4` workers.
-
- ```sh
- python run_cross_silo_femnist_worker.py --dataset_path=/data_nfs/code/fed_user_doc/federated/tests/st/cross_silo_femnist/35_7_client_img/ --http_server_address=10.*.*.*:5555
- ```
-
-    At present, the `worker` node of cross-silo federated learning supports single-machine multi-card and multi-machine multi-card distributed training modes. `run_cross_silo_femnist_worker_distributed.py` is a Python script provided for users to start distributed training of the `worker` node, and it also supports modifying the configuration through `argparse` arguments. In the following command, `device_num` represents the number of processes started by the `worker` cluster, and `run_distribute` indicates that the cluster starts distributed training; the HTTP start port is `5555` and the number of `worker` processes is `4`:
-
- ```sh
- python run_cross_silo_femnist_worker_distributed.py --device_num=4 --run_distribute=True --dataset_path=/data_nfs/code/fed_user_doc/federated/tests/st/cross_silo_femnist/35_7_client_img/ --http_server_address=10.*.*.*:5555
- ```
-
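The relationship between the HTTP starting port and the per-`Server` service ports described above can be sketched in a few lines (a minimal illustration; the `server_ports` helper is hypothetical):

```python
def server_ports(http_start_port, local_server_num):
    """Each Server process listens on the starting port plus its index."""
    return [http_start_port + i for i in range(local_server_num)]

# --http_server_address=...:5555 with --local_server_num=4 occupies ports 5555-5558.
print(server_ports(5555, 4))  # [5555, 5556, 5557, 5558]
```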
-After executing the above three commands, go to the `worker_0` folder under the current directory and check the `worker_0` log with the command `grep -rn "test acc" *`. You will see a print similar to the following:
-
-```text
-local epoch: 0, loss: 3.787421340711655, trian acc: 0.05342741935483871, test acc: 0.075
-```
-
-This means that cross-silo federated learning has started successfully and `worker_0` is training. The other workers can be checked in a similar way.
-
-If the workers were started in distributed multi-card training mode, enter the `worker_distributed/log_output/` folder under the current directory and run the command `grep -rn "test acc" *` to view the logs of the distributed `worker` cluster. You will see a print like the following:
-
-```text
-local epoch: 0, loss: 2.3467453340711655, trian acc: 0.06532451988877687, test acc: 0.076
-```
-
-Please refer to the [yaml configuration notes](https://www.mindspore.cn/federated/docs/zh-CN/master/horizontal/federated_server_yaml.html) for a description of the parameter configuration in the above scripts.
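Log lines in the format shown above can also be parsed programmatically, e.g. to track accuracy across epochs. A small sketch (the `parse_log_line` helper is hypothetical; note that the "trian acc" spelling comes from the log output itself):

```python
import re

# Pattern matching the training log format shown above.
LOG_PATTERN = re.compile(
    r"local epoch: (\d+), loss: ([\d.]+), trian acc: ([\d.]+), test acc: ([\d.]+)"
)

def parse_log_line(line):
    """Return (epoch, loss, train_acc, test_acc), or None if the line does not match."""
    m = LOG_PATTERN.search(line)
    if m is None:
        return None
    epoch, loss, train_acc, test_acc = m.groups()
    return int(epoch), float(loss), float(train_acc), float(test_acc)

sample = "local epoch: 0, loss: 3.787421340711655, trian acc: 0.05342741935483871, test acc: 0.075"
print(parse_log_line(sample))
```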
-
-### Viewing Log
-
-After the task is started successfully, the corresponding log files are generated under the current directory `cross_silo_femnist`, with the following directory structure:
-
-```text
-cross_silo_femnist
-├── scheduler
-│ └── scheduler.log # Print the log during running scheduler
-├── server_0
-│ └── server.log # Print the log during running server_0
-├── server_1
-│ └── server.log # Print the log during running server_1
-├── server_2
-│ └── server.log # Print the log during running server_2
-├── server_3
-│ └── server.log # Print the log during running server_3
-├── worker_0
-│ ├── ckpt # Store the aggregated model ckpt obtained by worker_0 at the end of each federated learning iteration
-│ │ ├── 0-fl-ms-bs32-0epoch.ckpt
-│ │ ├── 0-fl-ms-bs32-1epoch.ckpt
-│ │ │
-│ │ │ ......
-│ │ │
-│ │ └── 0-fl-ms-bs32-19epoch.ckpt
-│ └── worker.log # Record the output logs when worker_0 participates in the federated learning task
-└── worker_1
- ├── ckpt # Store the aggregated model ckpt obtained by worker_1 at the end of each federated learning iteration
- │ ├── 1-fl-ms-bs32-0epoch.ckpt
- │ ├── 1-fl-ms-bs32-1epoch.ckpt
- │ │
- │ │ ......
- │ │
- │ └── 1-fl-ms-bs32-19epoch.ckpt
- └── worker.log # Record the output logs when worker_1 participates in the federated learning task
-```
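The checkpoint names in the tree above appear to follow a fixed pattern: the worker id, a `fl-ms-bs32` fragment (where `32` reflects the batch size used in this example), and the local iteration index. A minimal sketch of the apparent naming rule (the `ckpt_name` helper is hypothetical):

```python
def ckpt_name(worker_id, epoch, batch_size=32):
    """Checkpoint file name matching the directory tree above."""
    return f"{worker_id}-fl-ms-bs{batch_size}-{epoch}epoch.ckpt"

# worker_0's first and last checkpoints over 20 iterations:
print(ckpt_name(0, 0))   # 0-fl-ms-bs32-0epoch.ckpt
print(ckpt_name(0, 19))  # 0-fl-ms-bs32-19epoch.ckpt
```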
-
-### Closing the Task
-
-If you want to stop the task midway, the following command is available:
-
-```sh
-python finish_cross_silo_femnist.py --redis_port=2345
-```
-
-Alternatively, wait until the training task finishes; the cluster will then exit automatically and does not need to be closed manually.
-
-### Results
-
-- Used data:
-
- The `35_7_client_img/` dataset generated in the `download dataset` section above
-
-- The number of client-side local training epochs: 20
-
-- The total number of cross-silo federated learning iterations: 20
-
-- Results (accuracy of the model on the client's test set after each iteration aggregation)
-
-`worker_0` result:
-
-```text
-worker_0/worker.log:7409:local epoch: 0, loss: 3.787421340711655, trian acc: 0.05342741935483871, test acc: 0.075
-worker_0/worker.log:14419:local epoch: 1, loss: 3.725699281115686, trian acc: 0.05342741935483871, test acc: 0.075
-worker_0/worker.log:21429:local epoch: 2, loss: 3.5285709657335795, trian acc: 0.19556451612903225, test acc: 0.16875
-worker_0/worker.log:28439:local epoch: 3, loss: 3.0393165519160608, trian acc: 0.4889112903225806, test acc: 0.4875
-worker_0/worker.log:35449:local epoch: 4, loss: 2.575952764115026, trian acc: 0.6854838709677419, test acc: 0.60625
-worker_0/worker.log:42459:local epoch: 5, loss: 2.2081101375296512, trian acc: 0.7782258064516129, test acc: 0.6875
-worker_0/worker.log:49470:local epoch: 6, loss: 1.9229739431736557, trian acc: 0.8054435483870968, test acc: 0.69375
-worker_0/worker.log:56480:local epoch: 7, loss: 1.7005576549999293, trian acc: 0.8296370967741935, test acc: 0.65625
-worker_0/worker.log:63490:local epoch: 8, loss: 1.5248727620766704, trian acc: 0.8407258064516129, test acc: 0.6375
-worker_0/worker.log:70500:local epoch: 9, loss: 1.3838803705352127, trian acc: 0.8568548387096774, test acc: 0.7
-worker_0/worker.log:77510:local epoch: 10, loss: 1.265225578921041, trian acc: 0.8679435483870968, test acc: 0.7125
-worker_0/worker.log:84520:local epoch: 11, loss: 1.167484122101638, trian acc: 0.8659274193548387, test acc: 0.70625
-worker_0/worker.log:91530:local epoch: 12, loss: 1.082880981700859, trian acc: 0.8770161290322581, test acc: 0.65625
-worker_0/worker.log:98540:local epoch: 13, loss: 1.0097520119572772, trian acc: 0.8840725806451613, test acc: 0.64375
-worker_0/worker.log:105550:local epoch: 14, loss: 0.9469810053708015, trian acc: 0.9022177419354839, test acc: 0.7
-worker_0/worker.log:112560:local epoch: 15, loss: 0.8907848935604703, trian acc: 0.9022177419354839, test acc: 0.6625
-worker_0/worker.log:119570:local epoch: 16, loss: 0.8416629644123349, trian acc: 0.9082661290322581, test acc: 0.70625
-worker_0/worker.log:126580:local epoch: 17, loss: 0.798475691030866, trian acc: 0.9122983870967742, test acc: 0.70625
-worker_0/worker.log:133591:local epoch: 18, loss: 0.7599438544427897, trian acc: 0.9243951612903226, test acc: 0.6875
-worker_0/worker.log:140599:local epoch: 19, loss: 0.7250227383907605, trian acc: 0.9294354838709677, test acc: 0.7125
-```
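The peak test accuracy in a run like the one above can be extracted from the log lines with a short script (a sketch under the log format shown; the `best_test_acc` helper is hypothetical and parses only the `local epoch` and `test acc` fields):

```python
def best_test_acc(log_lines):
    """Return (epoch, test_acc) for the epoch with the highest test accuracy."""
    best = None
    for line in log_lines:
        # Each line contains "local epoch: <n>, ..." and ends with "test acc: <value>".
        epoch = int(line.split("local epoch: ")[1].split(",")[0])
        acc = float(line.rsplit("test acc: ", 1)[1])
        if best is None or acc > best[1]:
            best = (epoch, acc)
    return best

# Two lines taken from the worker_0 excerpt above:
lines = [
    "local epoch: 10, loss: 1.265225578921041, trian acc: 0.8679435483870968, test acc: 0.7125",
    "local epoch: 19, loss: 0.7250227383907605, trian acc: 0.9294354838709677, test acc: 0.7125",
]
print(best_test_acc(lines))  # (10, 0.7125)
```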
-
-The test results of the other clients are essentially the same and are therefore not listed here.
diff --git a/docs/federated/docs/source_en/images/HFL_en.png b/docs/federated/docs/source_en/images/HFL_en.png
deleted file mode 100644
index f7b5adac95b8dff2fc010fa49607c706b67daab7..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/HFL_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/VFL_en.png b/docs/federated/docs/source_en/images/VFL_en.png
deleted file mode 100644
index 818bb3de139b9ae2d499c18383d08f324e2b634d..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/VFL_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/create_android_project.png b/docs/federated/docs/source_en/images/create_android_project.png
deleted file mode 100644
index a519264c4158fba67eb1ff5f5fbc3eae65b32363..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/create_android_project.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/data_join_en.png b/docs/federated/docs/source_en/images/data_join_en.png
deleted file mode 100644
index 9cd2b73335d611e9532867f08e675054244561aa..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/data_join_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/deploy_VFL_en.png b/docs/federated/docs/source_en/images/deploy_VFL_en.png
deleted file mode 100644
index 390edb1cd1be92e8234a79b36bd10571fba092f0..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/deploy_VFL_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/download_compress_client_en.png b/docs/federated/docs/source_en/images/download_compress_client_en.png
deleted file mode 100644
index ab2b84783c1478091c2f9d600e5b051a2f4149dd..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/download_compress_client_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/download_compress_server_en.png b/docs/federated/docs/source_en/images/download_compress_server_en.png
deleted file mode 100644
index 7917007287e2a7cf84adf3ecd559ee46b49b879b..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/download_compress_server_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/label_dp_en.png b/docs/federated/docs/source_en/images/label_dp_en.png
deleted file mode 100644
index 059f9bb61fdb129697d561cf476ef6e09adf4ecb..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/label_dp_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/lenet_50_clients_acc_en.png b/docs/federated/docs/source_en/images/lenet_50_clients_acc_en.png
deleted file mode 100644
index 49d69ab1e14d37c9ae54562f6be09bddae24a570..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/lenet_50_clients_acc_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/lenet_signds_loss_auc.png b/docs/federated/docs/source_en/images/lenet_signds_loss_auc.png
deleted file mode 100644
index 7304b69c4d0abf039549dce758b906d688213e4f..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/lenet_signds_loss_auc.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/mindspore_federated_networking.png b/docs/federated/docs/source_en/images/mindspore_federated_networking.png
deleted file mode 100644
index 4340cb66b638e072ffdb11167743cc45c36a9536..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/mindspore_federated_networking.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/splitnn_pangu_alpha_en.png b/docs/federated/docs/source_en/images/splitnn_pangu_alpha_en.png
deleted file mode 100644
index 6a423ab3dbe0d43e858d48bd3f6de7ad92c3f040..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/splitnn_pangu_alpha_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/splitnn_wide_and_deep_en.png b/docs/federated/docs/source_en/images/splitnn_wide_and_deep_en.png
deleted file mode 100644
index 0a281a046fbb44c59be02a31ee0b15092cc041ec..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/splitnn_wide_and_deep_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/start_android_project.png b/docs/federated/docs/source_en/images/start_android_project.png
deleted file mode 100644
index 3a9336add10acbbef60dc429b8a3bad1ca198c38..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/start_android_project.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/upload_compress_server_en.png b/docs/federated/docs/source_en/images/upload_compress_server_en.png
deleted file mode 100644
index 02e554d3c3977c7f041e6c68fc434a794c2e8a8a..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/upload_compress_server_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/upload_compression_client_en.png b/docs/federated/docs/source_en/images/upload_compression_client_en.png
deleted file mode 100644
index 8690363d2c4e29dfbe19927c4e65d2e80bf82e79..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/upload_compression_client_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/vfl_1_en.png b/docs/federated/docs/source_en/images/vfl_1_en.png
deleted file mode 100644
index 2cefd4c529b4df0dc5f6a36c667f02fcaa608fdc..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/vfl_1_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/vfl_backward_en.png b/docs/federated/docs/source_en/images/vfl_backward_en.png
deleted file mode 100644
index 867a4698dac1cac5a4233f20635074669747e96a..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/vfl_backward_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/vfl_feature_reconstruction_defense_en.png b/docs/federated/docs/source_en/images/vfl_feature_reconstruction_defense_en.png
deleted file mode 100644
index 608017209c42d910796a461337271855eebaad80..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/vfl_feature_reconstruction_defense_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/vfl_feature_reconstruction_en.png b/docs/federated/docs/source_en/images/vfl_feature_reconstruction_en.png
deleted file mode 100644
index 985eadb43d776efac54127e17e92dafd76106321..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/vfl_feature_reconstruction_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/vfl_forward_en.png b/docs/federated/docs/source_en/images/vfl_forward_en.png
deleted file mode 100644
index 37e6fbec1ff5b7f0c85bdc4fb86e0a7957b6ca54..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/vfl_forward_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/vfl_normal_communication_compress_en.png b/docs/federated/docs/source_en/images/vfl_normal_communication_compress_en.png
deleted file mode 100644
index 38bd38616ff55da6ae4439fd8f0ca1951990eb78..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/vfl_normal_communication_compress_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/vfl_pangu_communication_compress_en.png b/docs/federated/docs/source_en/images/vfl_pangu_communication_compress_en.png
deleted file mode 100644
index a9c10552966f3f8d1247665347bbc11f72287588..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/vfl_pangu_communication_compress_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/vfl_with_tee_en.png b/docs/federated/docs/source_en/images/vfl_with_tee_en.png
deleted file mode 100644
index 78732f01117e0e5e6e1f358e94563ae3f2ce44b5..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/vfl_with_tee_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/weight_diff_decode_en.png b/docs/federated/docs/source_en/images/weight_diff_decode_en.png
deleted file mode 100644
index c0a48f84a02a828b3a7b9e4c1206502a8786bdd5..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/weight_diff_decode_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/images/weight_diff_encode_en.png b/docs/federated/docs/source_en/images/weight_diff_encode_en.png
deleted file mode 100644
index 54df35c2ac7ac2169e6f91e25bd17c797e2586a0..0000000000000000000000000000000000000000
Binary files a/docs/federated/docs/source_en/images/weight_diff_encode_en.png and /dev/null differ
diff --git a/docs/federated/docs/source_en/index.rst b/docs/federated/docs/source_en/index.rst
deleted file mode 100644
index 510c30a9b9a46c6dd24033eb7b07616e44ff20f1..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/index.rst
+++ /dev/null
@@ -1,177 +0,0 @@
-.. MindSpore documentation master file, created by
- sphinx-quickstart on Thu Mar 24 11:00:00 2020.
- You can adapt this file completely to your liking, but it should at least
- contain the root `toctree` directive.
-
-MindSpore Federated Documents
-================================
-
-MindSpore Federated is an open source federated learning tool for MindSpore, and it enables all-scenario intelligent applications when user data is stored locally.
-
-Federated learning is an encrypted distributed machine learning technique that addresses data silos and enables efficient, secure, and reliable machine learning across multiple parties or resource computing nodes. It allows the participants to build AI models together without directly sharing local data, covering mainstream deep learning models such as ad recommendation, classification, and detection, and is mainly applied in fields such as finance, medical care, and recommendation.
-
-MindSpore Federated provides a horizontal federated mode based on sample federation and a vertical federated mode based on feature federation. It supports commercial deployment for millions of stateless terminal devices, as well as cloud federation between data centers across trust zones.
-
-Code repository address:
-
-Advantages of the MindSpore Federated Horizontal Framework
------------------------------------------------------------
-
-Horizontal Federated Architecture:
-
-.. raw:: html
-
-
-
-1. Privacy Protection
-
- It supports accuracy-lossless security aggregation solution based on secure multi-party computation (MPC) to prevent model theft.
-
- It supports performance-lossless encryption based on local differential privacy to prevent private data leakage from models.
-
- It supports a gradient protection scheme based on sign-based dimension selection (SignDS), which prevents leakage of private model data while reducing communication overhead by 99%.
-
-2. Distributed Federated Aggregation
-
- The loosely coupled cluster processing mode on the cloud and the distributed two-stage gradient aggregation paradigm support the deployment of tens of millions of heterogeneous devices, implement high-performance and highly available federated aggregation computing, and can cope with network instability and sudden load changes.
-
-3. Federated Learning Efficiency Improvement
-
- The adaptive frequency modulation strategy and gradient compression algorithm are supported to improve federated learning efficiency and save bandwidth resources.
-
- Multiple federated aggregation policies are supported to improve the smoothness of federated learning convergence and optimize both global and local accuracies.
-
-4. Easy to Use
-
- Only one line of code is required to switch between the standalone training and federated learning modes.
-
- The network models, aggregation algorithms, and security algorithms are programmable, and the security level can be customized.
-
- It supports the effectiveness evaluation of federated training models and provides monitoring capabilities for federated tasks.
-
-Advantages of the MindSpore Federated Vertical Framework
------------------------------------------------------------
-
-Vertical Federated Architecture:
-
-.. raw:: html
-
-
-
-1. Privacy Protection
-
- Supports a high-performance Private Set Intersection (PSI) protocol, which prevents federated participants from obtaining ID information outside the intersection and can cope with data imbalance scenarios.
-
- Supports a software-based feature protection solution that combines quantization and differential privacy, preventing attackers from reconstructing the original private data from intermediate features.
-
- Supports a hardware-based feature protection solution built on a trusted execution environment, providing high-strength and efficient feature protection.
-
- Supports a label protection solution based on differential privacy, preventing the leakage of user label data.
-
-2. Federated training
-
- Supports multiple types of split learning network structures.
-
- Supports cross-domain training of large models with pipeline parallel optimization.
-
-MindSpore Federated Working Process
-------------------------------------
-
-1. `Scenario Identification and Data Accumulation `_
-
- Identify scenarios where federated learning is used and accumulate local data for federated tasks on the client.
-
-2. `Model Selection and Framework Deployment `_
-
- Select or develop a model prototype and use a tool to generate a federated learning model that is easy to deploy.
-
-3. `Application Deployment `_
-
- Deploy the corresponding components to the business application and set up federated configuration tasks and deployment scripts on the server.
-
-Common Application Scenarios
-----------------------------
-
-1. `Image Classification `_
-
- Use federated learning to implement image classification applications.
-
-2. `Text Classification `_
-
- Use federated learning to implement text classification applications.
-
-.. toctree::
- :maxdepth: 1
- :caption: Deployment
-
- federated_install
- deploy_federated_server
- deploy_federated_client
- deploy_vfl
-
-.. toctree::
- :maxdepth: 1
- :caption: Horizontal Application
-
- image_classfication_dataset_process
- image_classification_application
- sentiment_classification_application
- image_classification_application_in_cross_silo
- object_detection_application_in_cross_silo
-
-.. toctree::
- :maxdepth: 1
- :caption: Vertical Application
-
- data_join
- split_wnd_application
- split_pangu_alpha_application
-
-.. toctree::
- :maxdepth: 1
- :caption: Security and Privacy
-
- local_differential_privacy_training_noise
- local_differential_privacy_training_signds
- local_differential_privacy_eval_laplace
- pairwise_encryption_training
- private_set_intersection
- secure_vertical_federated_learning_with_EmbeddingDP
- secure_vertical_federated_learning_with_TEE
- secure_vertical_federated_learning_with_DP
-
-.. toctree::
- :maxdepth: 1
- :caption: Communication Compression
-
- communication_compression
- vfl_communication_compress
-
-.. toctree::
- :maxdepth: 1
- :caption: Horizontal Federated API Reference
-
- horizontal_server
- cross_device
- horizontal/cross_silo
-
-.. toctree::
- :maxdepth: 1
- :caption: Vertical Federated API Reference
-
- Data_Join
- vertical/vertical_communicator
- vertical_federated_trainer
-
-.. toctree::
- :maxdepth: 1
- :caption: References
-
- faq
-
-.. toctree::
- :glob:
- :maxdepth: 1
- :caption: RELEASE NOTES
-
- RELEASE
diff --git a/docs/federated/docs/source_en/interface_description_federated_client.md b/docs/federated/docs/source_en/interface_description_federated_client.md
deleted file mode 100644
index abc50a417e61d83886203b57c6b04c5cdc319ea8..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/interface_description_federated_client.md
+++ /dev/null
@@ -1,350 +0,0 @@
-# Examples
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/interface_description_federated_client.md)
-
-Note that before using the following interfaces, please first refer to the document [on-device deployment](https://www.mindspore.cn/federated/docs/en/master/deploy_federated_client.html) to deploy the related environment.
-
-## flJobRun() for Starting Federated Learning
-
-Before calling the flJobRun() API, instantiate the parameter class FLParameter and set related parameters as follows:
-
-| Parameter | Type | Mandatory | Description | Remarks |
-| -------------------- | ---------------------------- | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| dataMap | Map<RunType, List<String>> | Y | The path of the federated learning dataset. | A dataset map of type Map<RunType, List<String>>: the key is the RunType enumeration and the value is the corresponding dataset list. When the key is RunType.TRAINMODE, the value is the list of training-related datasets; when the key is RunType.EVALMODE, the value is the list of evaluation-related datasets; when the key is RunType.INFERMODE, the value is the list of inference-related datasets. |
-| flName | String | Y | The package path of the model script used by federated learning. | We provide two types of model scripts for your reference ([Supervised sentiment classification task](https://gitee.com/mindspore/federated/tree/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/albert), [LeNet image classification task](https://gitee.com/mindspore/federated/tree/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/lenet)). For supervised sentiment classification tasks, this parameter can be set to the package path of the provided script file [AlBertClient.java](https://gitee.com/mindspore/federated/blob/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/albert/AlbertClient.java), such as `com.mindspore.flclient.demo.albert.AlbertClient`; for LeNet image classification tasks, it can be set to the package path of the provided script file [LenetClient.java](https://gitee.com/mindspore/federated/blob/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/lenet/LenetClient.java), such as `com.mindspore.flclient.demo.lenet.LenetClient`. Users can also refer to these two model scripts to define their own model script, and then set this parameter to the package path of the customized model file ModelClient.java (which needs to inherit from the class [Client.java](https://gitee.com/mindspore/federated/blob/master/mindspore_federated/device_client/src/main/java/com/mindspore/flclient/model/Client.java)). |
-| trainModelPath | String | Y | Path of a training model used for federated learning, which is an absolute path of the .ms file. | It is recommended to set the path to the training App's own directory to protect the data access security of the model itself. |
-| inferModelPath | String | Y | Path of an inference model used for federated learning, which is an absolute path of the .ms file. | For the normal federated learning mode (training and inference use the same model), the value of this parameter needs to be the same as that of `trainModelPath`; for the hybrid learning mode (training and inference use different models, and the server side also includes training process), this parameter is set to the path of actual inference model. It is recommended to set the path to the training App's own directory to protect the data access security of the model itself. |
-| sslProtocol | String | N | The TLS protocol version used by the device-cloud HTTPS communication. | A whitelist is set, and currently only "TLSv1.3" or "TLSv1.2" is supported. Only need to set it up in the HTTPS communication scenario. |
-| deployEnv | String | Y | The deployment environment for federated learning. | A whitelist is set, currently only "x86", "android" are supported. |
-| certPath | String | N | The self-signed root certificate path used for device-cloud HTTPS communication. | When the deployment environment is "x86" and the device-cloud uses a self-signed certificate for HTTPS communication authentication, this parameter needs to be set. The certificate must be consistent with the CA root certificate used to generate the cloud-side self-signed certificate to pass the verification. This parameter is used for non-Android scenarios. |
-| domainName | String | Y | The URL for device-cloud communication. | Currently, both HTTPS and HTTP communication are supported, with the corresponding formats https://...... and http://....... When `useElb` is set to true, the format must be https://127.0.0.0:6666 or http://127.0.0.0:6666, where `127.0.0.0` is the IP of the machine providing cloud-side services (corresponding to the cloud-side parameter `--scheduler_ip`) and `6666` corresponds to the cloud-side parameter `--fl_server_port`. |
-| ifUseElb | boolean | N | Used for multi-server scenarios to set whether to randomly send client requests to different servers within a certain range. | Setting to true means that the client will randomly send requests to a certain range of server addresses, and false means that the client's requests will be sent to a fixed server address. This parameter is used in non-Android scenarios, and the default value is false. |
-| serverNum | int | N | The number of servers that the client can choose to connect to. | When `ifUseElb` is set to true, it can be set to be consistent with the `server_num` parameter when the server is started on the cloud side. It is used to randomly select different servers to send information. This parameter is used in non-Android scenarios. The default value is 1. |
-| ifPkiVerify | boolean | N | The switch of device-cloud identity authentication. | Set to true to enable device-cloud security authentication, set to false to disable, and the default value is false. Identity authentication requires HUKS to provide a certificate. This parameter is only used in the Android environment (currently only supports HUAWEI phones). |
-| threadNum | int | N | The number of threads used in federated learning training and inference. | The default value is 1. |
-| cpuBindMode | BindMode | N | The cpu core that threads need to bind during federated learning training and inference. | It is the enumeration type `BindMode`, where BindMode.NOT_BINDING_CORE represents the unbound core, which is automatically assigned by the system, BindMode.BIND_LARGE_CORE represents the bound large core, and BindMode.BIND_MIDDLE_CORE represents the bound middle core. The default value is BindMode.NOT_BINDING_CORE. |
-| batchSize | int | Y | The number of single-step training samples used in federated learning training and inference, that is, batch size. | It needs to be consistent with the batch size of the input data of the model. |
-| iflJobResultCallback | IFLJobResultCallback | N | The federated learning callback function object `iflJobResultCallback`. | The user can implement the specific method of the interface class [IFLJobResultCallback.java](https://gitee.com/mindspore/federated/blob/master/mindspore_federated/device_client/src/main/java/com/mindspore/flclient/IFLJobResultCallback.java) in the project according to the needs of the actual scene, and set it as a callback function object in the federated learning task. We provide a simple implementation use case [FLJobResultCallback.java](https://gitee.com/mindspore/federated/blob/master/mindspore_federated/device_client/src/main/java/com/mindspore/flclient/IFLJobResultCallback.java) as the default value of this parameter. |
-
-Note 1: When using HTTP communication, there may exist communication security risks, please be aware.
-
-Note 2: In the Android environment, the following parameters need to be set when using HTTPS communication. The setting examples are as follows:
-
-```java
-FLParameter flParameter = FLParameter.getInstance();
-SecureSSLSocketFactory sslSocketFactory = SecureSSLSocketFactory.getInstance(applicationContext);
-SecureX509TrustManager x509TrustManager = new SecureX509TrustManager(applicationContext);
-flParameter.setSslSocketFactory(sslSocketFactory);
-flParameter.setX509TrustManager(x509TrustManager);
-```
-
-Among them, the two objects `SecureSSLSocketFactory` and `SecureX509TrustManager` need to be implemented in the Android project, and users need to design them by themselves according to the type of certificate on the phone.
-
-Note 3: In the x86 environment, currently only self-signed certificate authentication is supported when using HTTPS communication, and the following parameters need to be set. The setting examples are as follows:
-
-```java
-FLParameter flParameter = FLParameter.getInstance();
-String certPath = "CARoot.pem"; // the self-signed root certificate path used for device-cloud HTTPS communication.
-flParameter.setCertPath(certPath);
-```
-
-Note 4: In the Android environment, when `pkiVerify` is set to true and encrypt_train_type is set to PW_ENCRYPT on the cloud side, the following parameters need to be set. The setting examples are as follows:
-
-```java
-FLParameter flParameter = FLParameter.getInstance();
-String equipCrlPath = certPath;
-long validIterInterval = 3600000;
-flParameter.setEquipCrlPath(equipCrlPath);
-flParameter.setValidInterval(validIterInterval);
-```
-
-Among them, `equipCrlPath` is the path of the CRL certificate (certificate revocation list) required for certificate verification among devices. Generally, the device certificate CRL in "Huawei CBG Certificate Revocation Lists" can be preset. `validIterInterval` helps prevent replay attacks in PW_ENCRYPT mode and can generally be set to the time required for each round of device-cloud aggregation (unit: milliseconds; the default value is 3600000).
-
-Note 5: Before each federated learning task is started, the FLParameter class is instantiated for related parameter settings. When FLParameter is instantiated, a clientID is automatically generated at random, which is used to uniquely identify the client during interaction with the cloud side. If users need to set the clientID themselves, they can call the setClientID method after instantiating the FLParameter class; the started federated learning task will then use the clientID set by the user.
-
-Create a SyncFLJob object and use the flJobRun() method of the SyncFLJob class to start a federated learning task.
-
-The sample code (basic http communication) is as follows:
-
-1. Sample code of a supervised sentiment classification task
-
- ```java
- // create dataMap
- String trainTxtPath = "data/albert/supervise/client/1.txt";
-    String evalTxtPath = "data/albert/supervise/eval/eval.txt"; // Optional; needed only if you want to verify model accuracy after getModel
- String vocabFile = "data/albert/supervise/vocab.txt"; // Path of the dictionary file for data preprocessing.
-    String idsFile = "data/albert/supervise/vocab_map_ids.txt"; // Path of the mapping ID file of a dictionary.
-    Map<RunType, List<String>> dataMap = new HashMap<>();
-    List<String> trainPath = new ArrayList<>();
-    trainPath.add(trainTxtPath);
-    trainPath.add(vocabFile);
-    trainPath.add(idsFile);
-    List<String> evalPath = new ArrayList<>(); // Optional; needed only if you want to verify model accuracy after getModel
-    evalPath.add(evalTxtPath); // Optional
-    evalPath.add(vocabFile); // Optional
-    evalPath.add(idsFile); // Optional
-    dataMap.put(RunType.TRAINMODE, trainPath);
-    dataMap.put(RunType.EVALMODE, evalPath); // Optional; needed only if you want to verify model accuracy after getModel
-
-    String flName = "com.mindspore.flclient.demo.albert.AlbertClient"; // The package path of AlbertClient.java
- String trainModelPath = "ms/albert/train/albert_ad_train.mindir0.ms"; // Absolute path
- String inferModelPath = "ms/albert/train/albert_ad_train.mindir0.ms"; // Absolute path, consistent with trainModelPath
- String sslProtocol = "TLSv1.2";
- String deployEnv = "android";
- String domainName = "http://10.*.*.*:6668";
- boolean ifUseElb = true;
- int serverNum = 4;
- int threadNum = 4;
- BindMode cpuBindMode = BindMode.NOT_BINDING_CORE;
- int batchSize = 32;
-
- FLParameter flParameter = FLParameter.getInstance();
- flParameter.setFlName(flName);
- flParameter.setDataMap(dataMap);
- flParameter.setTrainModelPath(trainModelPath);
- flParameter.setInferModelPath(inferModelPath);
- flParameter.setSslProtocol(sslProtocol);
- flParameter.setDeployEnv(deployEnv);
- flParameter.setDomainName(domainName);
-    flParameter.setUseElb(ifUseElb);
- flParameter.setServerNum(serverNum);
- flParameter.setThreadNum(threadNum);
-    flParameter.setCpuBindMode(cpuBindMode);
-    flParameter.setBatchSize(batchSize);
-
- // start FLJob
- SyncFLJob syncFLJob = new SyncFLJob();
- syncFLJob.flJobRun();
- ```
-
-2. Sample code of a LeNet image classification task
-
- ```java
- // create dataMap
- String trainImagePath = "SyncFLClient/data/3500_clients_bin/f0178_39/f0178_39_bn_9_train_data.bin";
- String trainLabelPath = "SyncFLClient/data/3500_clients_bin/f0178_39/f0178_39_bn_9_train_label.bin";
-    String evalImagePath = "SyncFLClient/data/3500_clients_bin/f0178_39/f0178_39_bn_1_test_data.bin"; // Optional; needed only if you want to verify model accuracy after getModel
-    String evalLabelPath = "SyncFLClient/data/3500_clients_bin/f0178_39/f0178_39_bn_1_test_label.bin"; // Optional; needed only if you want to verify model accuracy after getModel
-    Map<RunType, List<String>> dataMap = new HashMap<>();
-    List<String> trainPath = new ArrayList<>();
-    trainPath.add(trainImagePath);
-    trainPath.add(trainLabelPath);
-    List<String> evalPath = new ArrayList<>(); // Optional; needed only if you want to verify model accuracy after getModel
-    evalPath.add(evalImagePath); // Optional
-    evalPath.add(evalLabelPath); // Optional
-    dataMap.put(RunType.TRAINMODE, trainPath);
-    dataMap.put(RunType.EVALMODE, evalPath); // Optional; needed only if you want to verify model accuracy after getModel
-
- String flName = "com.mindspore.flclient.demo.lenet.LenetClient"; // The package path of LenetClient.java
- String trainModelPath = "SyncFLClient/lenet_train.mindir0.ms"; // Absolute path
- String inferModelPath = "SyncFLClient/lenet_train.mindir0.ms"; // Absolute path, consistent with trainModelPath
- String sslProtocol = "TLSv1.2";
- String deployEnv = "android";
- String domainName = "http://10.*.*.*:6668";
- boolean ifUseElb = true;
- int serverNum = 4;
- int threadNum = 4;
- BindMode cpuBindMode = BindMode.NOT_BINDING_CORE;
- int batchSize = 32;
-
- FLParameter flParameter = FLParameter.getInstance();
- flParameter.setFlName(flName);
- flParameter.setDataMap(dataMap);
- flParameter.setTrainModelPath(trainModelPath);
- flParameter.setInferModelPath(inferModelPath);
- flParameter.setSslProtocol(sslProtocol);
- flParameter.setDeployEnv(deployEnv);
- flParameter.setDomainName(domainName);
-    flParameter.setUseElb(ifUseElb);
- flParameter.setServerNum(serverNum);
- flParameter.setThreadNum(threadNum);
-    flParameter.setCpuBindMode(cpuBindMode);
- flParameter.setBatchSize(batchSize);
-
- // start FLJob
- SyncFLJob syncFLJob = new SyncFLJob();
- syncFLJob.flJobRun();
- ```
-
-## modelInference() for Inferring Multiple Input Data Records
-
-Before calling the modelInference() API, instantiate the parameter class FLParameter and set related parameters as follows:
-
-| Parameter | Type | Mandatory | Description | Remarks |
-| -------------- | ---------------------------- | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| flName | String | Y | The package path of the model script used by federated learning. | We provide two types of model scripts for your reference ([Supervised sentiment classification task](https://gitee.com/mindspore/federated/tree/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/albert), [LeNet image classification task](https://gitee.com/mindspore/federated/tree/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/lenet)). For supervised sentiment classification tasks, this parameter can be set to the package path of the provided script file [AlbertClient.java](https://gitee.com/mindspore/federated/blob/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/albert/AlbertClient.java), such as `com.mindspore.flclient.demo.albert.AlbertClient`; for LeNet image classification tasks, this parameter can be set to the package path of the provided script file [LenetClient.java](https://gitee.com/mindspore/federated/blob/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/lenet/LenetClient.java), such as `com.mindspore.flclient.demo.lenet.LenetClient`. Users can also refer to these two model scripts, define their own model script, and set this parameter to the package path of the customized model file ModelClient.java (which needs to inherit from the class [Client.java](https://gitee.com/mindspore/federated/blob/master/mindspore_federated/device_client/src/main/java/com/mindspore/flclient/model/Client.java)). |
-| dataMap | Map\<RunType, List\<String\>\> | Y | The path of the federated learning dataset. | A map whose key is the RunType enumeration type and whose value is the corresponding dataset list: when the key is RunType.TRAINMODE, the value is the list of training-related datasets; when the key is RunType.EVALMODE, the value is the list of verification-related datasets; when the key is RunType.INFERMODE, the value is the list of inference-related datasets. |
-| inferModelPath | String | Y | Path of an inference model used for federated learning, which is an absolute path of the .ms file. | For the normal federated learning mode (training and inference use the same model), the value of this parameter needs to be the same as that of `trainModelPath`; for the hybrid learning mode (training and inference use different models, and the server side also includes training process), this parameter is set to the path of actual inference model. It is recommended to set the path to the training App's own directory to protect the data access security of the model itself. |
-| threadNum | int | N | The number of threads used in federated learning training and inference. | The default value is 1. |
-| cpuBindMode | BindMode | N | The cpu core that threads need to bind during federated learning training and inference. | It is the enumeration type `BindMode`, where BindMode.NOT_BINDING_CORE represents the unbound core, which is automatically assigned by the system, BindMode.BIND_LARGE_CORE represents the bound large core, and BindMode.BIND_MIDDLE_CORE represents the bound middle core. The default value is BindMode.NOT_BINDING_CORE. |
-| batchSize | int | Y | The number of single-step training samples used in federated learning training and inference, that is, batch size. | It needs to be consistent with the batch size of the input data of the model. |
-
-Create a SyncFLJob object and use the modelInference() method of the SyncFLJob class to start an inference task on the device. The inferred label array is returned.
-
-The sample code is as follows:
-
-1. Sample code of a supervised sentiment classification task
-
- ```java
- // create dataMap
- String inferTxtPath = "data/albert/supervise/eval/eval.txt";
- String vocabFile = "data/albert/supervise/vocab.txt";
-    String idsFile = "data/albert/supervise/vocab_map_ids.txt";
-    Map<RunType, List<String>> dataMap = new HashMap<>();
-    List<String> inferPath = new ArrayList<>();
- inferPath.add(inferTxtPath);
- inferPath.add(vocabFile);
- inferPath.add(idsFile);
- dataMap.put(RunType.INFERMODE, inferPath);
-
- String flName = "com.mindspore.flclient.demo.albert.AlbertClient"; // The package path of AlBertClient.java
- String inferModelPath = "ms/albert/train/albert_ad_train.mindir0.ms"; // Absolute path, consistent with trainModelPath
- int threadNum = 4;
- BindMode cpuBindMode = BindMode.NOT_BINDING_CORE;
- int batchSize = 32;
-
- FLParameter flParameter = FLParameter.getInstance();
- flParameter.setFlName(flName);
- flParameter.setDataMap(dataMap);
- flParameter.setInferModelPath(inferModelPath);
- flParameter.setThreadNum(threadNum);
-    flParameter.setCpuBindMode(cpuBindMode);
- flParameter.setBatchSize(batchSize);
-
- // inference
- SyncFLJob syncFLJob = new SyncFLJob();
- int[] labels = syncFLJob.modelInference();
- ```
-
-2. Sample code of a LeNet image classification task
-
- ```java
- // create dataMap
- String inferImagePath = "SyncFLClient/data/3500_clients_bin/f0178_39/f0178_39_bn_1_test_data.bin";
- String inferLabelPath = "SyncFLClient/data/3500_clients_bin/f0178_39/f0178_39_bn_1_test_label.bin";
-    Map<RunType, List<String>> dataMap = new HashMap<>();
-    List<String> inferPath = new ArrayList<>();
- inferPath.add(inferImagePath);
- inferPath.add(inferLabelPath);
- dataMap.put(RunType.INFERMODE, inferPath);
-
-    String flName = "com.mindspore.flclient.demo.lenet.LenetClient"; // The package path of LenetClient.java
- String inferModelPath = "SyncFLClient/lenet_train.mindir0.ms"; // Absolute path, consistent with trainModelPath
- int threadNum = 4;
- BindMode cpuBindMode = BindMode.NOT_BINDING_CORE;
- int batchSize = 32;
-
- FLParameter flParameter = FLParameter.getInstance();
- flParameter.setFlName(flName);
- flParameter.setDataMap(dataMap);
- flParameter.setInferModelPath(inferModelPath);
- flParameter.setThreadNum(threadNum);
-    flParameter.setCpuBindMode(cpuBindMode);
- flParameter.setBatchSize(batchSize);
-
- // inference
- SyncFLJob syncFLJob = new SyncFLJob();
- int[] labels = syncFLJob.modelInference();
- ```
-
-## getModel() for Obtaining the Latest Model on the Cloud
-
-Before calling the getModel() API, instantiate the parameter class FLParameter and set related parameters as follows:
-
-| Parameter | Type | Mandatory | Description | Remarks |
-| -------------- | --------- | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| flName | String | Y | The package path of the model script used by federated learning. | We provide two types of model scripts for your reference ([Supervised sentiment classification task](https://gitee.com/mindspore/federated/tree/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/albert), [LeNet image classification task](https://gitee.com/mindspore/federated/tree/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/lenet)). For supervised sentiment classification tasks, this parameter can be set to the package path of the provided script file [AlbertClient.java](https://gitee.com/mindspore/federated/blob/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/albert/AlbertClient.java), such as `com.mindspore.flclient.demo.albert.AlbertClient`; for LeNet image classification tasks, this parameter can be set to the package path of the provided script file [LenetClient.java](https://gitee.com/mindspore/federated/blob/master/example/quick_start_flclient/src/main/java/com/mindspore/flclient/demo/lenet/LenetClient.java), such as `com.mindspore.flclient.demo.lenet.LenetClient`. Users can also refer to these two model scripts, define their own model script, and set this parameter to the package path of the customized model file ModelClient.java (which needs to inherit from the class [Client.java](https://gitee.com/mindspore/federated/blob/master/mindspore_federated/device_client/src/main/java/com/mindspore/flclient/model/Client.java)). |
-| trainModelPath | String | Y | Path of a training model used for federated learning, which is an absolute path of the .ms file. | It is recommended to set the path to the training App's own directory to protect the data access security of the model itself. |
-| inferModelPath | String | Y | Path of an inference model used for federated learning, which is an absolute path of the .ms file. | For the normal federated learning mode (training and inference use the same model), the value of this parameter needs to be the same as that of `trainModelPath`; for the hybrid learning mode (training and inference use different models, and the server side also includes training process), this parameter is set to the path of actual inference model. It is recommended to set the path to the training App's own directory to protect the data access security of the model itself. |
-| sslProtocol | String | N | The TLS protocol version used by the device-cloud HTTPS communication. | A whitelist is set, and currently only "TLSv1.3" or "TLSv1.2" is supported. Only need to set it up in the HTTPS communication scenario. |
-| deployEnv | String | Y | The deployment environment for federated learning. | A whitelist is set, currently only "x86", "android" are supported. |
-| certPath | String | N | The self-signed root certificate path used for device-cloud HTTPS communication. | When the deployment environment is "x86" and the device-cloud uses a self-signed certificate for HTTPS communication authentication, this parameter needs to be set. The certificate must be consistent with the CA root certificate used to generate the cloud-side self-signed certificate to pass the verification. This parameter is used for non-Android scenarios. |
-| domainName | String | Y | The URL for device-cloud communication. | Currently, https and http communication are supported, with formats like https://...... and http://....... When `useElb` is set to true, the format must be https://127.0.0.0:6666 or http://127.0.0.0:6666, where `127.0.0.0` corresponds to the IP address of the machine providing cloud-side services (corresponding to the cloud-side parameter `--scheduler_ip`), and `6666` corresponds to the cloud-side parameter `--fl_server_port`. |
-| ifUseElb | boolean | N | Used for multi-server scenarios to set whether to randomly send client requests to different servers within a certain range. | Setting to true means that the client will randomly send requests to a certain range of server addresses, and false means that the client's requests will be sent to a fixed server address. This parameter is used in non-Android scenarios, and the default value is false. |
-| serverNum | int | N | The number of servers that the client can choose to connect to. | When `ifUseElb` is set to true, it can be set to be consistent with the `server_num` parameter when the server is started on the cloud side. It is used to randomly select different servers to send information. This parameter is used in non-Android scenarios. The default value is 1. |
-| serverMod | ServerMod | Y | The federated learning training mode. | The federated learning training mode of ServerMod enumeration type, where ServerMod.FEDERATED_LEARNING represents the normal federated learning mode (training and inference use the same model), and ServerMod.HYBRID_TRAINING represents the hybrid learning mode (training and inference use different models, and the server side also includes a training process). |
-
-Note 1: When using HTTP communication, there may be communication security risks; please be aware.
-
-Note 2: In the Android environment, the following parameters need to be set when using HTTPS communication. The setting examples are as follows:
-
-```java
-FLParameter flParameter = FLParameter.getInstance();
-SecureSSLSocketFactory sslSocketFactory = SecureSSLSocketFactory.getInstance(applicationContext);
-SecureX509TrustManager x509TrustManager = new SecureX509TrustManager(applicationContext);
-flParameter.setSslSocketFactory(sslSocketFactory);
-flParameter.setX509TrustManager(x509TrustManager);
-```
-
-Among them, the two objects `SecureSSLSocketFactory` and `SecureX509TrustManager` need to be implemented in the Android project; users should design them according to the type of certificate on the phone.
-
-Note 3: In the x86 environment, currently only self-signed certificate authentication is supported when using HTTPS communication, and the following parameters need to be set. The setting examples are as follows:
-
-```java
-FLParameter flParameter = FLParameter.getInstance();
-String certPath = "CARoot.pem"; // the self-signed root certificate path used for device-cloud HTTPS communication.
-flParameter.setCertPath(certPath);
-```
-
-Note 4: Before calling the getModel method, the FLParameter class is instantiated for related parameter settings. When FLParameter is instantiated, a clientID is automatically generated at random and is used to uniquely identify the client during interaction with the cloud side. If users need to set the clientID themselves, they can call the setClientID method after instantiating the FLParameter class; the user-specified clientID is then used once the getModel task starts.
-
-Create a SyncFLJob object and use the getModel() method of the SyncFLJob class to start a task of obtaining the latest model on the cloud. The status code of the getModel request is returned.
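-
-As a sketch of checking that returned status code (the enum name FLClientStatus follows this client library, but verify the exact return type against your SDK version):
-
-```java
-SyncFLJob syncFLJob = new SyncFLJob();
-FLClientStatus status = syncFLJob.getModel();
-if (status == FLClientStatus.SUCCESS) {
-    // The latest model weights on the cloud have been loaded into the local model files.
-} else {
-    // Obtaining the model failed; log the status and retry if appropriate.
-}
-```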
-
-The sample code is as follows:
-
-1. Supervised sentiment classification task
-
- ```java
-    String flName = "com.mindspore.flclient.demo.albert.AlbertClient"; // The package path of AlbertClient.java
- String trainModelPath = "ms/albert/train/albert_ad_train.mindir0.ms"; // Absolute path
- String inferModelPath = "ms/albert/train/albert_ad_train.mindir0.ms"; // Absolute path, consistent with trainModelPath
- String sslProtocol = "TLSv1.2";
- String deployEnv = "android";
- String domainName = "http://10.*.*.*:6668";
- boolean ifUseElb = true;
- int serverNum = 4;
- ServerMod serverMod = ServerMod.FEDERATED_LEARNING;
-
- FLParameter flParameter = FLParameter.getInstance();
- flParameter.setFlName(flName);
- flParameter.setTrainModelPath(trainModelPath);
- flParameter.setInferModelPath(inferModelPath);
- flParameter.setSslProtocol(sslProtocol);
- flParameter.setDeployEnv(deployEnv);
- flParameter.setDomainName(domainName);
-    flParameter.setUseElb(ifUseElb);
- flParameter.setServerNum(serverNum);
-    flParameter.setServerMod(serverMod);
-
- // getModel
- SyncFLJob syncFLJob = new SyncFLJob();
- syncFLJob.getModel();
- ```
-
-2. LeNet image classification task
-
- ```java
-    String flName = "com.mindspore.flclient.demo.lenet.LenetClient"; // The package path of LenetClient.java
- String trainModelPath = "SyncFLClient/lenet_train.mindir0.ms"; // Absolute path
- String inferModelPath = "SyncFLClient/lenet_train.mindir0.ms"; // Absolute path, consistent with trainModelPath
- String sslProtocol = "TLSv1.2";
- String deployEnv = "android";
- String domainName = "http://10.*.*.*:6668";
- boolean ifUseElb = true;
-    int serverNum = 4;
- ServerMod serverMod = ServerMod.FEDERATED_LEARNING;
-
- FLParameter flParameter = FLParameter.getInstance();
- flParameter.setFlName(flName);
- flParameter.setTrainModelPath(trainModelPath);
- flParameter.setInferModelPath(inferModelPath);
- flParameter.setSslProtocol(sslProtocol);
- flParameter.setDeployEnv(deployEnv);
- flParameter.setDomainName(domainName);
-    flParameter.setUseElb(ifUseElb);
- flParameter.setServerNum(serverNum);
-    flParameter.setServerMod(serverMod);
-
- // getModel
- SyncFLJob syncFLJob = new SyncFLJob();
- syncFLJob.getModel();
- ```
diff --git a/docs/federated/docs/source_en/java_api_callback.md b/docs/federated/docs/source_en/java_api_callback.md
deleted file mode 100644
index b1fc6d1a83771313bc8f678d7b623b5f2969097f..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/java_api_callback.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# Callback
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/java_api_callback.md)
-
-```java
-import com.mindspore.flclient.model.Callback;
-```
-
-Callback defines the hook functions used to record the results of the training, evaluation, and prediction stages in device-side federated learning.
-
-## Public Member Functions
-
-| function |
-| -------------------------------- |
-| [abstract Status stepBegin()](#stepbegin) |
-| [abstract Status stepEnd()](#stepend) |
-| [abstract Status epochBegin()](#epochbegin) |
-| [abstract Status epochEnd()](#epochend) |
-
-## stepBegin
-
-```java
- public abstract Status stepBegin()
-```
-
-Execute step begin function.
-
-- Returns
-
- Whether the execution is successful.
-
-## stepEnd
-
-```java
-public abstract Status stepEnd()
-```
-
-Execute step end function.
-
-- Returns
-
- Whether the execution is successful.
-
-## epochBegin
-
-```java
-public abstract Status epochBegin()
-```
-
-Execute epoch begin function.
-
-- Returns
-
- Whether the execution is successful.
-
-## epochEnd
-
-```java
-public abstract Status epochEnd()
-```
-
-Execute epoch end function.
-
-- Returns
-
- Whether the execution is successful.
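-
-As an illustrative sketch (the LossCallback name and the bookkeeping inside the hooks are assumptions for demonstration, not part of the API), a custom callback that tracks per-epoch statistics might look like this:
-
-```java
-public class LossCallback extends Callback {
-    private float epochLoss = 0.0f;
-    private int stepCount = 0;
-
-    @Override
-    public Status stepBegin() {
-        return Status.SUCCESS;
-    }
-
-    @Override
-    public Status stepEnd() {
-        // Accumulate the loss of the current step into epochLoss here (model access omitted).
-        stepCount++;
-        return Status.SUCCESS;
-    }
-
-    @Override
-    public Status epochBegin() {
-        epochLoss = 0.0f;
-        stepCount = 0;
-        return Status.SUCCESS;
-    }
-
-    @Override
-    public Status epochEnd() {
-        // Report the average loss of this epoch, e.g. epochLoss / Math.max(stepCount, 1) (logging omitted).
-        return Status.SUCCESS;
-    }
-}
-```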
diff --git a/docs/federated/docs/source_en/java_api_client.md b/docs/federated/docs/source_en/java_api_client.md
deleted file mode 100644
index 92816018eafe3c27f6b0e72cdb75080b8be53c83..0000000000000000000000000000000000000000
--- a/docs/federated/docs/source_en/java_api_client.md
+++ /dev/null
@@ -1,173 +0,0 @@
-# Client
-
-[](https://gitee.com/mindspore/docs/blob/master/docs/federated/docs/source_en/java_api_client.md)
-
-```java
-import com.mindspore.flclient.model.Client;
-```
-
-Client defines the execution process object of the end-side federated learning algorithm.
-
-## Public Member Functions
-
-| function |
-| -------------------------------- |
-| [abstract List\<Callback\> initCallbacks(RunType runType, DataSet dataSet)](#initcallbacks) |
-| [abstract Map\<RunType, Integer\> initDataSets(Map\<RunType, List\<String\>\> files)](#initdatasets) |
-| [abstract float getEvalAccuracy(List\<Callback\> evalCallbacks)](#getevalaccuracy) |
-| [abstract List