diff --git a/docs/devtoolkit/docs/Makefile b/docs/devtoolkit/docs/Makefile deleted file mode 100644 index 1eff8952707bdfa503c8d60c1e9a903053170ba2..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line, and also -# from the environment for the first two. -SPHINXOPTS ?= -SPHINXBUILD ?= sphinx-build -SOURCEDIR = source_zh_cn -BUILDDIR = build_zh_cn - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/docs/devtoolkit/docs/requirements.txt b/docs/devtoolkit/docs/requirements.txt deleted file mode 100644 index a1b6a69f6dbd9c6f78710f56889e14f0e85b27f4..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/requirements.txt +++ /dev/null @@ -1,7 +0,0 @@ -sphinx == 4.4.0 -docutils == 0.17.1 -myst-parser == 0.18.1 -sphinx_rtd_theme == 1.0.0 -numpy -IPython -jieba diff --git a/docs/devtoolkit/docs/source_en/PyCharm_change_version.md b/docs/devtoolkit/docs/source_en/PyCharm_change_version.md deleted file mode 100644 index 0098ecec9f755781bbef0a1500bf5e11409d9efc..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/PyCharm_change_version.md +++ /dev/null @@ -1,38 +0,0 @@ -# API Mapping - API Version Switching - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/PyCharm_change_version.md) - -## Overview - -API mapping refers to the mapping relationship between 
PyTorch APIs and MindSpore APIs. -MindSpore Dev Toolkit provides two functions based on this mapping: API mapping search and API mapping scan, and users can freely switch the version of the API mapping data. - -## API Mapping Data Version Switching - -1. When the plug-in starts, it defaults to the API mapping data version that matches the current plug-in version. The API mapping data version is shown in the lower right. This version number only affects the API mapping functionality described in this section and does not change the version of MindSpore in the environment. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image137.jpg) - -2. Click the API mapping data version to bring up the selection list. You can switch to another version by clicking one of the preset versions, or choose "other version" and enter a different version number. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image138.jpg) - -3. Click any version number to start switching versions. An animation below indicates the switching status. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image139.jpg) - -4. If you want to use a custom version number, select "other version" in the selection list, enter the version number in the popup box, and click OK to start switching versions. Note: Enter the version number in the 2.1 or 2.1.0 format; otherwise, nothing happens when you click OK. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image140.jpg) - -5. If the switch is successful, the lower right status bar displays the API mapping data version information after the switch. 
- - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image141.jpg) - -6. If the switch fails, the lower right status bar shows the API mapping data version information before the switch. If the switch fails due to non-existent version number or network error, please check and try again. If you want to see the latest documentation, you can switch to the master version. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image142.jpg) - -7. When a customized version number is successfully switched, this version number is added to the list of versions to display. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image143.jpg) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_en/PyCharm_plugin_install.md b/docs/devtoolkit/docs/source_en/PyCharm_plugin_install.md deleted file mode 100644 index 2850c6f4bd53ee005b6be78f15ccb3d7719a0dc3..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/PyCharm_plugin_install.md +++ /dev/null @@ -1,13 +0,0 @@ -# PyCharm Plug-in Installation - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/PyCharm_plugin_install.md) - -## Installation Steps - -1. Obtain [Plug-in Zip package](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/IdePlugin/any/MindSpore_Dev_ToolKit-2.1.0.zip). -2. Start Pycharm and click on the upper left menu bar, select File->Settings->Plugins->Install Plugin from Disk. - As shown in the figure: - - ![image-20211223175637989](./images/clip_image050.jpg) - -3. Select the plug-in zip package. 
\ No newline at end of file diff --git a/docs/devtoolkit/docs/source_en/VSCode_api_scan.md b/docs/devtoolkit/docs/source_en/VSCode_api_scan.md deleted file mode 100644 index f4c5940a574d63163fc6481b7e32943d30def042..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/VSCode_api_scan.md +++ /dev/null @@ -1,49 +0,0 @@ -# API Mapping - API Scanning - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_api_scan.md) - -## Functions Introduction - -* Quickly scan for APIs that appear in your code and display API details directly in the sidebar. -* For the convenience of users of other machine learning frameworks, the plug-in scans the mainstream framework APIs that appear in the code and matches them to the corresponding MindSpore APIs. -* The data version of API mapping supports switching. Please refer to the section [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/en/master/VSCode_change_version.html) for details. - -## File-level API Mapping Scanning - -1. Right-click anywhere in the current file to open the menu and select "Scan Local Files". - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image116.jpg) - -2. The right-hand column will populate with the operators scanned in the current file, grouped into three result lists: "PyTorch APIs that can be transformed", "Probably the result of torch.Tensor API" and "PyTorch API that does not provide a direct mapping relationship at this time". - - where - - * "PyTorch APIs that can be transformed" means PyTorch APIs used in the file that can be converted to MindSpore APIs. 
- * "Probably the result of torch.Tensor API" means APIs with the same names as torch.Tensor APIs; they may be torch.Tensor APIs and, if so, can be converted to MindSpore APIs. - * "PyTorch API that does not provide a direct mapping relationship at this time" means APIs that are PyTorch APIs, or possibly torch.Tensor APIs, but do not directly correspond to MindSpore APIs. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image117.jpg) - -## Project-level API Mapping Scanning - -1. Click the MindSpore API Mapping Scan icon on the left sidebar of Visual Studio Code. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image118.jpg) - -2. A project tree view of the current IDE project, containing only Python files, is generated in the left sidebar. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image119.jpg) - -3. If you select a single Python file in the view, you can get a list of operator scan results for that file. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image120.jpg) - -4. If you select a file directory in the view, you can get a list of operator scan results for all Python files in that directory. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image121.jpg) - -5. The blue text is clickable and automatically opens the page in the user's default browser. 
- - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image122.jpg) - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image123.jpg) diff --git a/docs/devtoolkit/docs/source_en/VSCode_api_search.md b/docs/devtoolkit/docs/source_en/VSCode_api_search.md deleted file mode 100644 index 83b4fb4bcc26ce8d07de01074930a7efca4a4b38..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/VSCode_api_search.md +++ /dev/null @@ -1,29 +0,0 @@ -# API Mapping - API Search - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_api_search.md) - -## Function Introduction - -* Quickly search for MindSpore APIs and display API details directly in the sidebar. -* For the convenience of users of other machine learning frameworks, searching for an API of another mainstream framework also matches the corresponding MindSpore API. -* The data version of API mapping supports switching. Please refer to the section [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/en/master/VSCode_change_version.html) for details. - -## Usage Steps - -1. Click the MindSpore API Mapping Search icon on the left sidebar of Visual Studio Code. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image124.jpg) - -2. An input box appears in the left sidebar. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image125.jpg) - -3. 
Enter any word in the input box; the search results for the current keyword are displayed below and update in real time as you type. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image126.jpg) - -4. Click any search result to open its page in the user's default browser. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image127.jpg) - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image128.jpg) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_en/VSCode_change_version.md b/docs/devtoolkit/docs/source_en/VSCode_change_version.md deleted file mode 100644 index 6469fa1f78a21af96e7f0fff45f7412af03fca45..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/VSCode_change_version.md +++ /dev/null @@ -1,39 +0,0 @@ -# API Mapping - Version Switching - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_change_version.md) - -## Overview - -API mapping refers to the mapping relationship between PyTorch APIs and MindSpore APIs. MindSpore Dev Toolkit provides two functions based on this mapping: API mapping search and API mapping scan, and users can freely switch the version of the API mapping data. - -## API Mapping Data Version Switching - -1. Different versions of API mapping data produce different API mapping scan and search results, but do not affect the version of MindSpore in the environment. The default version is the same as the plug-in version, and the version information is displayed in the bottom left status bar. 
- - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image129.jpg) - -2. Clicking this status bar brings up a drop-down box at the top of the page containing the preset version numbers that can be switched to. Users can click any version number to switch versions, or click the "Customize Input" option and enter another version number in the pop-up input box. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image130.jpg) - -3. Click any version number to start switching versions; the status bar in the lower left indicates the status of the version switch. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image131.jpg) - -4. If you want to use a custom version number, click the "Customize Input" option in the drop-down box. The drop-down box changes to an input box; enter the version number in the 2.1 or 2.1.0 format and press Enter to start switching versions. The status bar in the lower-left corner indicates the switching status. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image132.jpg) - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image133.jpg) - -5. If the switch is successful, a message in the lower right indicates success, and the status bar in the lower left displays the version information of the API mapping data after the switch. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image134.jpg) - -6. 
If the switch fails, the message in the lower right indicates that the switch fails, and the status bar in the lower left shows the API mapping data version information before the switch. If the switch fails due to non-existent version number or network error, please check and try again. If you want to see the latest documentation, you can switch to the master version. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image135.jpg) - -7. When the customized version number is switched successfully, this version number is added to the drop-down box for display. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image136.jpg) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_en/VSCode_plugin_install.md b/docs/devtoolkit/docs/source_en/VSCode_plugin_install.md deleted file mode 100644 index 7f192ae88ac085657900b582550961f6592344eb..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/VSCode_plugin_install.md +++ /dev/null @@ -1,18 +0,0 @@ -# Visual Studio Code Plug-in Installation - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_plugin_install.md) - -## Installation Steps - -1. Obtain [Plug-in vsix package](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/IdePlugin/any/mindspore-dev-toolkit-2.1.0.vsix). -2. Click the fifth button on the left, "Extensions". Click the three dots in the upper right corner, and then click "Install from VSIX..." - - ![image-20211223175637989](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image112.jpg) - -3. 
Select the downloaded vsix file from the folder and the plug-in will be installed automatically. When "Completed installing MindSpore Dev Toolkit extension from VSIX" appears in the bottom right corner, the plug-in has been installed successfully. - - ![image-20211223175637989](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image113.jpg) - -4. Click the refresh button in the left column; the "MindSpore Dev Toolkit" plug-in appears in the "INSTALLED" directory, indicating that the plug-in has been installed successfully. - - ![image-20211223175637989](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image114.jpg) diff --git a/docs/devtoolkit/docs/source_en/VSCode_smart_completion.md b/docs/devtoolkit/docs/source_en/VSCode_smart_completion.md deleted file mode 100644 index ad4f7ec61e109e178c86d6a25a3d74509054369d..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/VSCode_smart_completion.md +++ /dev/null @@ -1,22 +0,0 @@ -# Code Completion - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/VSCode_smart_completion.md) - -## Functions Description - -* Provide AI code completion based on the MindSpore project. -* Easily develop MindSpore without installing the MindSpore environment. - -## Usage Steps - -1. When you install or use the plug-in for the first time, the model will be downloaded automatically. "Start Downloading Model" will appear in the lower right corner, and "Download Model Successfully" indicates that the model has been downloaded and started successfully. If the Internet speed is slow, downloading the model can take more than ten minutes. 
The message "Downloaded Model Successfully" will appear only after the download is complete. If this is not the first time you use the plug-in, the message will not appear. - - ![image-20211223175637989](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image115.jpg) - -2. Open a Python file to write code. - - ![image-20211223175637989](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image097.jpg) - -3. Completion takes effect automatically as you code. Suggestions suffixed with "MindSpore Dev Toolkit" are provided by the plug-in's smart completion. - - ![image-20211223175637989](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image111.jpg) diff --git a/docs/devtoolkit/docs/source_en/api_scanning.md b/docs/devtoolkit/docs/source_en/api_scanning.md deleted file mode 100644 index 687c6f09bc1a5ac8a3f734bc2391e5a7e2ee46a6..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/api_scanning.md +++ /dev/null @@ -1,62 +0,0 @@ -# API Mapping - API Scanning - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/api_scanning.md) - -## Functions Introduction - -* Quickly scan the APIs in the code and display the API details directly in the sidebar. -* For the convenience of users of other machine learning frameworks, the plug-in scans the mainstream framework APIs that appear in the code and matches them to the corresponding MindSpore APIs. -* The data version of API mapping supports switching; please refer to the section [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/en/master/PyCharm_change_version.html) for details. 
- -## Usage Steps - -### Document-level API Scanning - -1. Right-click anywhere in the current file to open the menu, and click "API scan" at the top of the menu. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image100.jpg) - -2. The right sidebar will automatically pop up to show the scanned operators and display a detailed list containing the name, URL and other information. If no operator is found in this file, no pop-up window will appear. - - where: - - * "PyTorch/TensorFlow APIs that can be converted to MindSpore APIs" means PyTorch or TensorFlow APIs used in the file that can be converted to MindSpore APIs. - * "APIs that cannot be converted at this time" means APIs that are PyTorch or TensorFlow APIs but do not have direct MindSpore equivalents. - * "Possible PyTorch/TensorFlow API" refers to a convertible case where there may be a PyTorch or TensorFlow API because of chained calls. - * TensorFlow API scanning is an experimental feature. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image101.jpg) - -3. Click the blue text, and another column will automatically open at the top to show the page. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image102.jpg) - -4. Click the "export" button in the upper right corner to export the content to a CSV file. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image103.jpg) - -### Project-level API Scanning - -1. Right-click anywhere in the current file to open the menu, click the second option "API scan project-level" at the top of the menu, or select "Tools" in the toolbar above, and then select "API scan project-level". 
- - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image104.jpg) - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image105.jpg) - -2. The right sidebar pops up with the operators scanned from the entire project and displays a detailed list containing the name, URL and other information. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image106.jpg) - -3. In the upper box you can select a single file; the operators in that file are then shown separately in the lower box, and you can switch between files at will. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image107.jpg) - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image108.jpg) - -4. Click the blue text, and another column will automatically open at the top to show the page. - - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image109.jpg) - -5. Click the "export" button in the upper right corner to export the content to a CSV file. 
- - ![img](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/devtoolkit/docs/source_zh_cn/images/clip_image110.jpg) diff --git a/docs/devtoolkit/docs/source_en/api_search.md b/docs/devtoolkit/docs/source_en/api_search.md deleted file mode 100644 index fdc498109f16503d20ee17297fe71cb3491de439..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/api_search.md +++ /dev/null @@ -1,29 +0,0 @@ -# API Mapping - API Search - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/api_search.md) - -## Functions - -* You can quickly search for MindSpore APIs and view API details in the sidebar. -* If you use other machine learning frameworks, you can search for APIs of other mainstream frameworks to match MindSpore APIs. -* The data version of API mapping supports switching, and please refer to the section [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/en/master/PyCharm_change_version.html) for details. - -## Procedure - -1. Double-click **Shift**. The global search page is displayed. - - ![img](images/clip_image060.jpg) - -2. Click **MindSpore**. - - ![img](images/clip_image062.jpg) - -3. Search for a PyTorch or TensorFlow API to obtain the mapping between the PyTorch or TensorFlow API and the MindSpore API. - - ![img](images/clip_image064.jpg) - - ![img](images/clip_image066.jpg) - -4. Click an item in the list to view its official document in the sidebar. 
- - ![img](images/clip_image068.jpg) diff --git a/docs/devtoolkit/docs/source_en/compiling.md b/docs/devtoolkit/docs/source_en/compiling.md deleted file mode 100644 index b7b531c1ea2459f340cc18c62317014f6cf2faa5..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/compiling.md +++ /dev/null @@ -1,87 +0,0 @@ -# Source Code Compilation Guide - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/compiling.md) - -The following describes how to compile the MindSpore Dev ToolKit project from its source code using IntelliJ IDEA. - -## Background - -* MindSpore Dev ToolKit is a PyCharm plug-in developed using IntelliJ IDEA. [IntelliJ IDEA](https://www.jetbrains.com/idea/download) and PyCharm are IDEs developed by JetBrains. -* MindSpore Dev ToolKit is developed based on JDK 11. To learn JDK- and Java-related knowledge, visit [https://jdk.java.net/](https://jdk.java.net/). -* [Gradle 6.6.1](https://gradle.org) is used to build MindSpore Dev Toolkit and does not need to be installed in advance. IntelliJ IDEA automatically uses the "gradle wrapper" mechanism to configure the required Gradle based on the code. - -## Required Software - -* [IntelliJ IDEA](https://www.jetbrains.com/idea/download) - -* JDK 11 - - Note: IntelliJ IDEA 2021.3 contains a built-in JDK named **jbr-11 JetBrains Runtime version 11.0.10**, which can be used directly. - - ![img](images/clip_image031.jpg) - -## Compilation - -1. Verify that the required software has been successfully configured. - -2. Download the [project](https://gitee.com/mindspore/ide-plugin) source code from the code repository. - - * Download the ZIP package. - - ![img](images/clip_image032.jpg) - - * Run the git command to download the package. - - ``` - git clone https://gitee.com/mindspore/ide-plugin.git - ``` - -3. 
Use IntelliJ IDEA to open the project. - - 3.1 Choose **File** > **Open**. - - ![img](images/clip_image033.jpg) - - 3.2 Go to the directory for storing the project. - - ![img](images/clip_image034.jpg) - - 3.3 Click **Load** in the dialog box that is displayed in the lower right corner. Alternatively, click **pycharm**, right-click the **settings.gradle** file, and choose **Link Gradle Project** from the shortcut menu. - - ![img](images/clip_image035.jpg) - - ![img](images/clip_image036.jpg) - -4. If the system displays a message indicating that no JDK is available, select a JDK. ***Skip this step if the JDK is available.*** - - 4.1 The following figure shows the situation when the JDK is not available. - - ![img](images/clip_image037.jpg) - - 4.2 Choose **File** > **Project Structure**. - - ![img](images/clip_image038.jpg) - - 4.3 Select JDK 11. - - ![img](images/clip_image039.jpg) - -5. Wait until the synchronization is complete. - - ![img](images/clip_image040.jpg) - -6. Build a project. - - ![img](images/clip_image042.jpg) - -7. Wait till the build is complete. - - ![img](images/clip_image044.jpg) - -8. Obtain the plug-in installation package from the **/pycharm/build/distributions** directory in the project directory. - - ![img](images/clip_image046.jpg) - -## References - -* This project is built based on section [Building Plugins with Gradle](https://plugins.jetbrains.com/docs/intellij/gradle-build-system.html) in *IntelliJ Platform Plugin SDK*. For details about advanced functions such as debugging, see the official document. diff --git a/docs/devtoolkit/docs/source_en/conf.py b/docs/devtoolkit/docs/source_en/conf.py deleted file mode 100644 index c9a9f1473388d49e636fa4e02eca883d57e54de1..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/conf.py +++ /dev/null @@ -1,85 +0,0 @@ -# Configuration file for the Sphinx documentation builder. -# -# This file only contains a selection of the most common options. 
For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import re - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Dev Toolkit' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'myst_parser', - 'sphinx.ext.autodoc' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -pygments_style = 'sphinx' - -# -- Options for HTML output ------------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. 
-# -html_theme = 'sphinx_rtd_theme' - -html_search_options = {'dict': '../../../resource/jieba.txt'} - -html_static_path = ['_static'] - -src_release = os.path.join(os.getenv("DT_PATH"), 'RELEASE.md') -des_release = "./RELEASE.md" -with open(src_release, "r", encoding="utf-8") as f: - data = f.read() -if len(re.findall("\n## (.*?)\n",data)) > 1: - content = re.findall("(## [\s\S\n]*?)\n## ", data) -else: - content = re.findall("(## [\s\S\n]*)", data) -#result = content[0].replace('# MindSpore', '#', 1) -with open(des_release, "w", encoding="utf-8") as p: - p.write("# Release Notes"+"\n\n") - p.write(content[0]) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_en/images/clip_image002.jpg b/docs/devtoolkit/docs/source_en/images/clip_image002.jpg deleted file mode 100644 index 24132302f1552bed6be56b7dd660625448774680..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image002.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image004.jpg b/docs/devtoolkit/docs/source_en/images/clip_image004.jpg deleted file mode 100644 index 7ed0e3729940f514a7bfd61c1a7be22166c0bb02..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image004.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image006.jpg b/docs/devtoolkit/docs/source_en/images/clip_image006.jpg deleted file mode 100644 index e0c323eec249024fe19126ce4c931133564cf7b7..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image006.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image008.jpg b/docs/devtoolkit/docs/source_en/images/clip_image008.jpg deleted file mode 100644 index a071ad67222931372d4b62f7b0cf334a4015e70d..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image008.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_en/images/clip_image010.jpg b/docs/devtoolkit/docs/source_en/images/clip_image010.jpg deleted file mode 100644 index 43ca88d40bc56d5a5113bc29b97a7b559ef659af..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image010.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image012.jpg b/docs/devtoolkit/docs/source_en/images/clip_image012.jpg deleted file mode 100644 index 0e35c9f219292913a51f1f0d5b7a5e154008620f..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image012.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image014.jpg b/docs/devtoolkit/docs/source_en/images/clip_image014.jpg deleted file mode 100644 index 794de60c7d7e76a58e8d7212e449a1bd8e194b21..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image014.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image015.jpg b/docs/devtoolkit/docs/source_en/images/clip_image015.jpg deleted file mode 100644 index 8172e21f871bed6866b3a91b83252838228d257a..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image015.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image016.jpg b/docs/devtoolkit/docs/source_en/images/clip_image016.jpg deleted file mode 100644 index c836c0cf7898e4757ddb3410dde18e754894dd25..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image016.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image018.jpg b/docs/devtoolkit/docs/source_en/images/clip_image018.jpg deleted file mode 100644 index 777738f7cf60454b7b5f26e6c5a29ba5d55750ff..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image018.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_en/images/clip_image019.jpg b/docs/devtoolkit/docs/source_en/images/clip_image019.jpg deleted file mode 100644 index ab02b702bfd1c0986adb4a15c1f455b56df0a4a1..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image019.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image020.jpg b/docs/devtoolkit/docs/source_en/images/clip_image020.jpg deleted file mode 100644 index d946c3cb3a851f690a5643b5afe119597aed5b22..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image020.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image021.jpg b/docs/devtoolkit/docs/source_en/images/clip_image021.jpg deleted file mode 100644 index 74672d9513f4a60f77450ae5516cee2060215241..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image021.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image022.jpg b/docs/devtoolkit/docs/source_en/images/clip_image022.jpg deleted file mode 100644 index 6b26f18c7d8bb43db0beb8a8d2bd386489192922..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image022.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image023.jpg b/docs/devtoolkit/docs/source_en/images/clip_image023.jpg deleted file mode 100644 index 5981a0fb25c681417a0bdf24d2392411b41f0faf..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image023.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image024.jpg b/docs/devtoolkit/docs/source_en/images/clip_image024.jpg deleted file mode 100644 index 505e8b6c5c4d81dfd67d91b1dad08f938444368a..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image024.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_en/images/clip_image025.jpg b/docs/devtoolkit/docs/source_en/images/clip_image025.jpg deleted file mode 100644 index 946e276b6a982303b6b146266037a739e6c2639a..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image025.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image026.jpg b/docs/devtoolkit/docs/source_en/images/clip_image026.jpg deleted file mode 100644 index 8b787215af5cf9ae223c2b33121c3171923d2de2..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image026.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image027.jpg b/docs/devtoolkit/docs/source_en/images/clip_image027.jpg deleted file mode 100644 index aa4d7d4a8a6b503fe29885368547daa535e34796..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image027.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image028.jpg b/docs/devtoolkit/docs/source_en/images/clip_image028.jpg deleted file mode 100644 index 3126f80aecac28e8beaa54dc393122c60dbe1357..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image028.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image029.jpg b/docs/devtoolkit/docs/source_en/images/clip_image029.jpg deleted file mode 100644 index 6587240e4a456f3792fece52bcdcbbed077ca67b..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image029.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image031.jpg b/docs/devtoolkit/docs/source_en/images/clip_image031.jpg deleted file mode 100644 index 2f829b48e72e62525860cfe599e0a4ada82010ca..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image031.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_en/images/clip_image032.jpg b/docs/devtoolkit/docs/source_en/images/clip_image032.jpg deleted file mode 100644 index 37589efbe0f57c442f665824831a2685d81c8713..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image032.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image033.jpg b/docs/devtoolkit/docs/source_en/images/clip_image033.jpg deleted file mode 100644 index bdca68324cf7ee8f4e9bd18817a82954910e52c9..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image033.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image034.jpg b/docs/devtoolkit/docs/source_en/images/clip_image034.jpg deleted file mode 100644 index 874b10d4b2ca476da16a4d1e749cdb6b31ecb59e..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image034.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image035.jpg b/docs/devtoolkit/docs/source_en/images/clip_image035.jpg deleted file mode 100644 index 0b0465169553e57795320255295b8fa789950522..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image035.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image036.jpg b/docs/devtoolkit/docs/source_en/images/clip_image036.jpg deleted file mode 100644 index c7c6c72819b655884d97637b696d1814e5a7fdbf..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image036.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image037.jpg b/docs/devtoolkit/docs/source_en/images/clip_image037.jpg deleted file mode 100644 index 531e8184e02c43aa177a51c3cc32355cc3df9d42..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image037.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_en/images/clip_image038.jpg b/docs/devtoolkit/docs/source_en/images/clip_image038.jpg deleted file mode 100644 index a8b4d88190c626139bad49cd42a9f7e908b4d0e4..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image038.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image039.jpg b/docs/devtoolkit/docs/source_en/images/clip_image039.jpg deleted file mode 100644 index 2eab0ceac9c1bd5d8b6ade3d65a6a3ce8b1f8fd4..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image039.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image040.jpg b/docs/devtoolkit/docs/source_en/images/clip_image040.jpg deleted file mode 100644 index a879fb1f12d8b6c4bd02332abf9b3bd734207763..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image040.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image042.jpg b/docs/devtoolkit/docs/source_en/images/clip_image042.jpg deleted file mode 100644 index 2454ade258da6d428c9e23ece2adf7f0291d1a12..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image042.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image044.jpg b/docs/devtoolkit/docs/source_en/images/clip_image044.jpg deleted file mode 100644 index cbff652015c36a5856afc909518f3c0fd22f23ff..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image044.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image046.jpg b/docs/devtoolkit/docs/source_en/images/clip_image046.jpg deleted file mode 100644 index 58a493ea4f69b264fc69cfd3e34f32d5a171c303..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image046.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_en/images/clip_image050.jpg b/docs/devtoolkit/docs/source_en/images/clip_image050.jpg deleted file mode 100644 index 35cc26d483358550c9a53ce855c2ae483eddb7e1..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image050.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image060.jpg b/docs/devtoolkit/docs/source_en/images/clip_image060.jpg deleted file mode 100644 index 7723975694f7f56d88187a69626343af11efbd23..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image060.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image062.jpg b/docs/devtoolkit/docs/source_en/images/clip_image062.jpg deleted file mode 100644 index 838bc48ab8d77f7dbba9ca02925838a49b19ce53..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image062.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image064.jpg b/docs/devtoolkit/docs/source_en/images/clip_image064.jpg deleted file mode 100644 index fb39e70b78b45af301973ea802a219c482a21590..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image064.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image066.jpg b/docs/devtoolkit/docs/source_en/images/clip_image066.jpg deleted file mode 100644 index 0a596cfb3ef7a79674ff33a7be5c97859cc2b9c4..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image066.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image068.jpg b/docs/devtoolkit/docs/source_en/images/clip_image068.jpg deleted file mode 100644 index 0023ba9236a768001e462d6a10434719f3f733fd..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image068.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_en/images/clip_image072.jpg b/docs/devtoolkit/docs/source_en/images/clip_image072.jpg deleted file mode 100644 index d1e5fad4192d4cb5821cafe4031dbc4ff599eccd..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image072.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image074.jpg b/docs/devtoolkit/docs/source_en/images/clip_image074.jpg deleted file mode 100644 index 97fa2b21b4029ff75156893f8abdff2e77aa38bd..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image074.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image076.jpg b/docs/devtoolkit/docs/source_en/images/clip_image076.jpg deleted file mode 100644 index e754c7dcd30ee6fa82ab20bde4d66f69aabe2fa7..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image076.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image088.jpg b/docs/devtoolkit/docs/source_en/images/clip_image088.jpg deleted file mode 100644 index 8b85f0727893c4cf6cd258550466ca4f4a340e6e..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image088.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image090.jpg b/docs/devtoolkit/docs/source_en/images/clip_image090.jpg deleted file mode 100644 index a3f405388fd75b23b652bc86475be5fd5e1f48ac..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image090.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image092.jpg b/docs/devtoolkit/docs/source_en/images/clip_image092.jpg deleted file mode 100644 index 68ca9c66fc3f03760873075af20c8a9e28aaab48..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image092.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_en/images/clip_image093.jpg b/docs/devtoolkit/docs/source_en/images/clip_image093.jpg deleted file mode 100644 index 594b2ceadb2c27290e0339e14b298fa2feffe6a9..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image093.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image094.jpg b/docs/devtoolkit/docs/source_en/images/clip_image094.jpg deleted file mode 100644 index e931a95180d27d55590e73948ebe80a1f81bede1..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image094.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/images/clip_image096.jpg b/docs/devtoolkit/docs/source_en/images/clip_image096.jpg deleted file mode 100644 index 3ed0c88500bb4caffccea4d08aaa3a6310e177bd..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_en/images/clip_image096.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_en/index.rst b/docs/devtoolkit/docs/source_en/index.rst deleted file mode 100644 index a5723cfae54780d9911757eadbe14d0991952572..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/index.rst +++ /dev/null @@ -1,61 +0,0 @@ -MindSpore Dev Toolkit -============================ - -MindSpore Dev Toolkit is a development kit supporting the `PyCharm `_ (cross-platform Python IDE) plug-in developed by MindSpore, and provides functions such as `Project creation `_ , `intelligent supplement `_ , `API search `_ , and `Document search `_ . - -MindSpore Dev Toolkit creates the best intelligent computing experience, improve the usability of the MindSpore framework, and facilitate the promotion of the MindSpore ecosystem through deep learning, intelligent search, and intelligent recommendation. 
- -Code repository address: - -System Requirements ------------------------------- - -- Operating systems supported by the plug-in: - - - Windows 10 - - - Linux - - - macOS (Only the x86 architecture is supported. The code completion function is not available currently.) - -- PyCharm versions supported by the plug-in: - - - 2020.3 - - - 2021.X - - - 2022.X - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: PyCharm Plugin Usage Guide - :hidden: - - PyCharm_plugin_install - compiling - smart_completion - PyCharm_change_version - api_search - api_scanning - knowledge_search - mindspore_project_wizard - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: VSCode Plugin Usage Guide - :hidden: - - VSCode_plugin_install - VSCode_smart_completion - VSCode_change_version - VSCode_api_search - VSCode_api_scan - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: RELEASE NOTES - - RELEASE diff --git a/docs/devtoolkit/docs/source_en/knowledge_search.md b/docs/devtoolkit/docs/source_en/knowledge_search.md deleted file mode 100644 index 91a0606a577644b0a18684e55f0197b05bb72cfb..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/knowledge_search.md +++ /dev/null @@ -1,22 +0,0 @@ -# Document Search - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/knowledge_search.md) - -## Functions - -* Recommendation: It provides exact search results based on user habits. -* It provides immersive document search experience to avoid switching between the IDE and browser. It resides on the sidebar to adapt to the page layout. - -## Procedure - -1. Click the sidebar to display the search page. - - ![img](images/clip_image072.jpg) - -2. Enter **API Mapping** and click the search icon to view the result. - - ![img](images/clip_image074.jpg) - -3. Click the home icon to return to the search page. 
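The sidebar search described in the steps above — type a keyword, get matching documents — can be sketched with a toy in-memory index. The document titles below are invented placeholders, not the plug-in's real search index:

```python
# Toy in-memory document index; the titles are invented placeholders.
DOCS = [
    "API Mapping - API Search",
    "API Mapping - API Scanning",
    "Creating a Project",
    "Code Completion",
]

def search(query: str) -> list[str]:
    """Return titles containing every query word, case-insensitively."""
    words = query.lower().split()
    return [t for t in DOCS if all(w in t.lower() for w in words)]

print(search("api mapping"))
```

A real implementation would rank results and learn from user habits; the matching shown here is only the simplest keyword filter.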
- - ![img](images/clip_image076.jpg) diff --git a/docs/devtoolkit/docs/source_en/mindspore_project_wizard.md b/docs/devtoolkit/docs/source_en/mindspore_project_wizard.md deleted file mode 100644 index b26c4cc49d24905c229a2a1c32d71f05a7f8fd30..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/mindspore_project_wizard.md +++ /dev/null @@ -1,103 +0,0 @@ -# Creating a Project - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/mindspore_project_wizard.md) - -## Background - -This function is implemented based on [conda](https://conda.io). Conda is a package management and environment management system, and one of the recommended installation methods for MindSpore. - -## Functions - -* It creates a conda environment or selects an existing conda environment, and installs the MindSpore binary package in the conda environment. -* It deploys the best practice template. In addition to testing whether the environment is successfully installed, it also provides a tutorial for MindSpore beginners. -* When the network condition is good, the environment can be installed within 10 minutes so that you can experience MindSpore immediately, reducing beginners' environment configuration time by up to 80%. - -## Procedure - -1. Choose **File** > **New Project**. - - ![img](images/clip_image002.jpg) - -2. Select **MindSpore**. - - ![img](images/clip_image004.jpg) - -3. Download and install Miniconda. ***If conda has been installed, skip this step.*** - - 3.1 Click **Install Miniconda Automatically**. - - ![img](images/clip_image006.jpg) - - 3.2 Select an installation folder. **You are advised to use the default path to install conda.** - - ![img](images/clip_image008.jpg) - - 3.3 Click **Install**.
- - ![img](images/clip_image010.jpg) - - ![img](images/clip_image012.jpg) - - 3.4 Wait for the installation to complete. - - ![img](images/clip_image014.jpg) - - 3.5 Restart PyCharm as prompted or restart PyCharm later. ***Note: The following steps can be performed only after PyCharm is restarted.*** - - ![img](images/clip_image015.jpg) - -4. If **Conda executable** is not automatically filled, select the path of the installed conda. - - ![img](images/clip_image016.jpg) - -5. Create a conda environment or select an existing conda environment. - - * Create a conda environment. **You are advised to use the default path to create the conda environment. Due to PyCharm restrictions on Linux, you can only select the default directory.** - - ![img](images/clip_image018.jpg) - - * Select an existing conda environment in PyCharm. - - ![img](images/clip_image019.jpg) - -6. Select a hardware environment and a MindSpore best practice template. - - 6.1 Select a hardware environment. - - ![img](images/clip_image020.jpg) - - 6.2 Select a best practice template. The best practice template provides some sample projects for beginners to get familiar with MindSpore. The best practice template can be run directly. - - ![img](images/clip_image021.jpg) - -7. Click **Create** to create a project and wait until MindSpore is successfully downloaded and installed. - - 7.1 Click **Create** to create a MindSpore project. - - ![img](images/clip_image022.jpg) - - 7.2 The conda environment is being created. - - ![img](images/clip_image023.jpg) - - 7.3 MindSpore is being configured through conda. - - ![img](images/clip_image024.jpg) - -8. Wait till the MindSpore project is created. - - ![img](images/clip_image025.jpg) - -9. Check whether the MindSpore project is successfully created. - - * Click **Terminal**, enter **python -c "import mindspore;mindspore.run_check()"**, and check the output. If the version number shown in the following figure is displayed, the MindSpore environment is available. 
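The verification command in step 9 above can also be wrapped in a small script. This is a defensive sketch that assumes nothing about whether MindSpore is importable in the current interpreter:

```python
import importlib.util

def verify_mindspore() -> bool:
    """Return True if MindSpore is importable and run_check() completes."""
    if importlib.util.find_spec("mindspore") is None:
        print("MindSpore is not installed in this environment.")
        return False
    import mindspore
    mindspore.run_check()  # prints the version and a validation result
    return True

ok = verify_mindspore()
```

`mindspore.run_check()` is the same call the one-liner in step 9 issues; the wrapper only adds a readable message when the package is missing.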
- - ![img](images/clip_image026.jpg) - - * If you select a best practice template, you can run the best practice to test the MindSpore environment. - - ![img](images/clip_image027.jpg) - - ![img](images/clip_image028.jpg) - - ![img](images/clip_image029.jpg) diff --git a/docs/devtoolkit/docs/source_en/smart_completion.md b/docs/devtoolkit/docs/source_en/smart_completion.md deleted file mode 100644 index a3082c79785732ff85a829618bce1c5c8ca72f11..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_en/smart_completion.md +++ /dev/null @@ -1,36 +0,0 @@ -# Code Completion - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_en/smart_completion.md) - -## Functions - -* It completes code based on AI for the MindSpore project. -* You can easily develop MindSpore without installing the MindSpore environment. - -## Procedure - -1. Open a Python file and write code. - - ![img](images/clip_image088.jpg) - -2. During encoding, the code completion function is enabled automatically. Code lines with the "MindSpore" identifier are automatically completed by MindSpore Dev Toolkit. - - ![img](images/clip_image090.jpg) - - ![img](images/clip_image092.jpg) - -## Description - -1. In versions later than PyCharm 2021, the completed code will be rearranged based on machine learning. This behavior may cause the plug-in's completed code to be displayed with lower priority. You can disable this function in **Settings** and use MindSpore Dev Toolkit to sort code. - - ![img](images/clip_image093.jpg) - -2. Comparison before and after this function is disabled. 
- - * Function disabled - - ![img](images/clip_image094.jpg) - - * Function enabled - - ![img](images/clip_image096.jpg) diff --git a/docs/devtoolkit/docs/source_zh_cn/PyCharm_change_version.md b/docs/devtoolkit/docs/source_zh_cn/PyCharm_change_version.md deleted file mode 100644 index d0d553574bedc10d411ff45310718a05903b981d..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/PyCharm_change_version.md +++ /dev/null @@ -1,39 +0,0 @@ -# API Mapping - API Version Switching - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/PyCharm_change_version.md) - -## Overview - -API mapping refers to the mapping relationship between PyTorch APIs and MindSpore APIs. -MindSpore Dev Toolkit provides two major functions, API mapping search and API mapping scan, and lets users freely switch the version of the API mapping data. - -## Switching the API Mapping Data Version - -1. When the plug-in starts, it uses the API mapping data version matching the current plug-in version by default. The API mapping data version is displayed in the lower right corner. This version number affects only the API mapping functions in this chapter and does not change the MindSpore version in the environment. - - ![img](./images/clip_image137.jpg) - -2. Click the API mapping data version to open a selection list. Click a preset version to switch to it, or select "other version" and enter another version number. - - ![img](./images/clip_image138.jpg) - -3. Click any version number to start switching. An animation below indicates that the switch is in progress. - - ![img](./images/clip_image139.jpg) - -4. To enter a custom version number, select "other version" in the list, enter the version number in the dialog box, and click OK. Note: Enter the version number in the format of 2.1 or 2.1.0; otherwise, clicking OK has no effect. - - ![img](./images/clip_image140.jpg) - -5. If the switch succeeds, the status bar in the lower right corner shows the new API mapping data version. - - ![img](./images/clip_image141.jpg) - -6. If the switch fails, the status bar in the lower right corner still shows the previous API mapping data version. A nonexistent version number or a network error can cause the failure; check the cause and try again. To view the latest documents, switch to the master version. - - ![img](./images/clip_image142.jpg) - -7. 
After a custom version number is switched to successfully, it is added to the version list. - - ![img](./images/clip_image143.jpg) - diff --git a/docs/devtoolkit/docs/source_zh_cn/PyCharm_plugin_install.md b/docs/devtoolkit/docs/source_zh_cn/PyCharm_plugin_install.md deleted file mode 100644 index 9622c7c3159ed08594dc123af9bffaf56fafc354..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/PyCharm_plugin_install.md +++ /dev/null @@ -1,13 +0,0 @@ -# PyCharm Plug-in Installation - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/PyCharm_plugin_install.md) - -## Installation Procedure - -1. Obtain the [plug-in ZIP package](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/IdePlugin/any/MindSpore_Dev_ToolKit-2.1.0.zip). -2. Start PyCharm, open the menu bar in the upper left corner, and choose File->Settings->Plugins->Install Plugin from Disk, - as shown in the figure: - - ![image-20211223175637989](./images/clip_image050.jpg) - -3. Select the plug-in ZIP package. \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_api_scan.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_api_scan.md deleted file mode 100644 index b639ac7c0951b3011232d5723c9cb27c65e12e35..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/VSCode_api_scan.md +++ /dev/null @@ -1,49 +0,0 @@ -# API Mapping - API Scan - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_api_scan.md) - -## Functions - -* Quickly scans the APIs used in the code and shows API details directly in the sidebar. -* To help users of other machine learning frameworks, it scans mainstream framework APIs used in the code and suggests the matching MindSpore APIs. -* The API mapping data version can be switched. For details, see the [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/zh-CN/master/VSCode_change_version.html) section. - -## File-Level API Mapping Scan - -1. Right-click anywhere in the current file to open the menu, and choose "Scan Local Files". - - ![img](./images/clip_image116.jpg) - -2. 
The right pane shows the operators scanned from the current file, grouped into three result lists: "PyTorch APIs that can be converted", "results that may be torch.Tensor APIs", and "PyTorch APIs with no direct mapping yet". - - Where: - - * "PyTorch APIs that can be converted" are PyTorch APIs used in the file that can be converted to MindSpore APIs - * "may be torch.Tensor APIs" are APIs whose names match torch.Tensor API names, which may be torch.Tensor APIs and can be converted to MindSpore APIs - * "PyTorch APIs with no direct mapping yet" are APIs that are PyTorch APIs, or may be torch.Tensor APIs, but have no direct MindSpore API counterpart yet - - ![img](./images/clip_image117.jpg) - -## Project-Level API Mapping Scan - -1. Click the MindSpore API mapping scan icon in the left sidebar of Visual Studio Code. - - ![img](./images/clip_image118.jpg) - -2. The left pane shows a project tree view containing only the Python files in the current IDE project. - - ![img](./images/clip_image119.jpg) - -3. Select a single Python file in the view to get the operator scan result list for that file. - - ![img](./images/clip_image120.jpg) - -4. Select a directory in the view to get the operator scan result list for all Python files in that directory. - - ![img](./images/clip_image121.jpg) - -5. All text in blue can be clicked to open the corresponding web page in the user's default browser. - - ![img](./images/clip_image122.jpg) - - ![img](./images/clip_image123.jpg) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_api_search.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_api_search.md deleted file mode 100644 index e57ed777648e995b432f9a96ec58bc82a64502c3..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/VSCode_api_search.md +++ /dev/null @@ -1,29 +0,0 @@ -# API Mapping - API Search - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_api_search.md) - -## Functions - -* Quickly searches for MindSpore APIs and shows API details directly in the sidebar. -* To help users of other machine learning frameworks, it searches other mainstream framework APIs and suggests the matching MindSpore APIs. -* The API mapping data version can be switched. For details, see the [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/zh-CN/master/VSCode_change_version.html) section. - -## Procedure - -1. Click the MindSpore API mapping search icon in the left sidebar of Visual Studio Code. - - ![img](./images/clip_image124.jpg) - -2. An input box appears in the left sidebar. - - ![img](./images/clip_image125.jpg) - -3. Enter any word in the input box. The search results for the current keyword are shown below and updated in real time as you type. - - ![img](./images/clip_image126.jpg) - -4. 
Click any search result to open the web page in the user's default browser. - - ![img](./images/clip_image127.jpg) - - ![img](./images/clip_image128.jpg) diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_change_version.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_change_version.md deleted file mode 100644 index 2822c877f55f06ea1c2927f850fb32582cc9dc2c..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/VSCode_change_version.md +++ /dev/null @@ -1,39 +0,0 @@ -# API Mapping - Version Switching - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_change_version.md) - -## Overview - -API mapping refers to the mapping relationship between PyTorch APIs and MindSpore APIs. MindSpore Dev Toolkit provides two major functions, API mapping search and API mapping scan, and lets users freely switch the version of the API mapping data. - -## Switching the API Mapping Data Version - -1. Different versions of the API mapping data produce different API mapping scan and search results, but do not affect the MindSpore version in the environment. The default version matches the plug-in version, and the version information is shown in the status bar in the lower left corner. - - ![img](./images/clip_image129.jpg) - -2. Click this status bar. A drop-down list appears at the top of the page with the version numbers available by default. Click any version number to switch to it, or click the "custom input" option and enter another version number in the input box that then appears. - - ![img](./images/clip_image130.jpg) - -3. Click any version number to start switching. The status bar in the lower left corner indicates that the switch is in progress. - - ![img](./images/clip_image131.jpg) - -4. To enter a custom version number, click the "custom input" option in the drop-down list. The drop-down list turns into an input box. Enter the version number in the format of 2.1 or 2.1.0 and press Enter to start switching. The status bar in the lower left corner indicates that the switch is in progress. - - ![img](./images/clip_image132.jpg) - - ![img](./images/clip_image133.jpg) - -5. If the switch succeeds, a message in the lower right corner indicates success, and the status bar in the lower left corner shows the new API mapping data version. - - ![img](./images/clip_image134.jpg) - -6. If the switch fails, a message in the lower right corner indicates failure, and the status bar in the lower left corner still shows the previous API mapping data version. A nonexistent version number or a network error can cause the failure; check the cause and try again. To view the latest documents, switch to the master version. - - ![img](./images/clip_image135.jpg) - -7. 
After a custom version number has been switched to successfully, it is added to the drop-down list. - - ![img](./images/clip_image136.jpg) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_plugin_install.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_plugin_install.md deleted file mode 100644 index 14ee27f84e661910b7db7959533b337050d0adf3..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/VSCode_plugin_install.md +++ /dev/null @@ -1,18 +0,0 @@ -# Visual Studio Code Plugin Installation - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_plugin_install.md) - -## Installation Steps - -1. Download the [plugin VSIX package](https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/IdePlugin/any/mindspore-dev-toolkit-2.1.0.vsix). -2. Click "Extensions", the fifth button in the left sidebar, click the three dots at the top right, and then click "Install from VSIX..." - - ![img](./images/clip_image112.jpg) - -3. Select the downloaded VSIX file from the folder; the plugin starts installing automatically. The message "Completed installing MindSpore Dev Toolkit extension from VSIX" at the bottom right indicates that the installation succeeded. - - ![img](./images/clip_image113.jpg) - -4. Click the refresh button in the left panel; the "MindSpore Dev Toolkit" plugin now appears in the "INSTALLED" list, which confirms the installation. - - ![img](./images/clip_image114.jpg) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_zh_cn/VSCode_smart_completion.md b/docs/devtoolkit/docs/source_zh_cn/VSCode_smart_completion.md deleted file mode 100644 index 80634d7f3c87c03e78e3f85f4b3e5bac03ca3b24..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/VSCode_smart_completion.md +++ /dev/null @@ -1,22 +0,0 @@ -# Code Completion - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/VSCode_smart_completion.md) - -## Features - -* Provides AI-powered code completion based on the MindSpore project. -* Develop with MindSpore easily, even without a MindSpore environment installed. - -## Usage Steps - -1. 
When the plugin is installed or used for the first time, it automatically downloads the model: the message "开始下载Model" (model download started) appears at the bottom right, and "下载Model成功" (model downloaded successfully) indicates that the model has been downloaded and started. On a slow connection, the download can take ten minutes or more; the success message appears only after the download completes. The messages do not appear on subsequent uses. - - ![img](./images/clip_image115.jpg) - -2. Open a Python file and write code. - - ![img](./images/clip_image097.jpg) - -3. Completion takes effect automatically while you code. Suggestions whose names carry the MindSpore Dev Toolkit suffix are provided by this plugin's smart completion. - - ![img](./images/clip_image111.jpg) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_zh_cn/api_scanning.md b/docs/devtoolkit/docs/source_zh_cn/api_scanning.md deleted file mode 100644 index f425da32cad074e934b0bf199184ac67d20368ce..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/api_scanning.md +++ /dev/null @@ -1,62 +0,0 @@ -# API Mapping - API Scan - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/api_scanning.md) - -## Features - -* Quickly scan the APIs used in the code and show API details directly in the sidebar. -* To help users of other machine learning frameworks, scan the mainstream-framework APIs used in the code and get the corresponding MindSpore APIs suggested. -* The API mapping data version can be switched; see the [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/zh-CN/master/PyCharm_change_version.html) section for details. - -## Usage Steps - -### File-Level API Scan - -1. Right-click anywhere in the current file to open the context menu, then click "API scan" at the top of the menu. - - ![img](./images/clip_image100.jpg) - -2. The right-hand panel opens automatically and lists the scanned APIs with details such as name and URL. If no APIs are found in the file, the panel does not open. - - Specifically: - - * "PyTorch/TensorFlow APIs convertible to MindSpore APIs" are PyTorch or TensorFlow APIs that are used in the file and can be converted to MindSpore APIs - * "APIs that cannot be converted yet" are PyTorch or TensorFlow APIs that have no direct MindSpore counterpart yet - * "Possible PyTorch/TensorFlow APIs" are calls that, because of chained invocation, may be convertible PyTorch or TensorFlow APIs - * TensorFlow API scanning is an experimental feature - - ![img](./images/clip_image101.jpg) - -3. All text in blue is clickable and opens the web page in a new pane above. - - ![img](./images/clip_image102.jpg) - -4. Click the "Export" button at the top right to export the content to a CSV file. - - ![img](./images/clip_image103.jpg) - -### Project-Level API Scan - -1. Right-click anywhere in the current file to open the context menu and click the second item "API scan project-level", or choose "Tools" in the toolbar above and then "API scan project-level". - - ![img](./images/clip_image104.jpg) - - ![img](./images/clip_image105.jpg) - -2. 
The right-hand panel lists the APIs scanned from the whole project, with details such as name and URL. - - ![img](./images/clip_image106.jpg) - -3. Select a single file in the upper box; the lower box then shows only that file's APIs. You can switch between files freely. - - ![img](./images/clip_image107.jpg) - - ![img](./images/clip_image108.jpg) - -4. All text in blue is clickable and opens the web page in a new pane above. - - ![img](./images/clip_image109.jpg) - -5. Click the "Export" button to export the content to a CSV file. - - ![img](./images/clip_image110.jpg) diff --git a/docs/devtoolkit/docs/source_zh_cn/api_search.md b/docs/devtoolkit/docs/source_zh_cn/api_search.md deleted file mode 100644 index 948e7d70cc9131ce71b3571232d025bce5c70b09..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/api_search.md +++ /dev/null @@ -1,29 +0,0 @@ -# API Mapping - API Search - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/api_search.md) - -## Features - -* Quickly search MindSpore APIs and show API details directly in the sidebar. -* To help users of other machine learning frameworks, search the APIs of other mainstream frameworks and get the corresponding MindSpore APIs suggested. -* The API mapping data version can be switched; see the [API Mapping - Version Switching](https://www.mindspore.cn/devtoolkit/docs/zh-CN/master/PyCharm_change_version.html) section for details. - -## Usage Steps - -1. Double-press Shift to open the global search page. - - ![img](images/clip_image060.jpg) - -2. Select MindSpore. - - ![img](images/clip_image062.jpg) - -3. Enter the PyTorch or TensorFlow API to search for, and get the list of corresponding MindSpore APIs. - - ![img](images/clip_image064.jpg) - - ![img](images/clip_image066.jpg) - -4. 
Click an entry in the list to browse its official documentation in the right sidebar. - - ![img](images/clip_image068.jpg) diff --git a/docs/devtoolkit/docs/source_zh_cn/compiling.md b/docs/devtoolkit/docs/source_zh_cn/compiling.md deleted file mode 100644 index ce30f02e48dbe63d091a3d4d99ee64dddad2b5e7..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/compiling.md +++ /dev/null @@ -1,86 +0,0 @@ -# Source Code Compilation Guide - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/compiling.md) - -This document describes how to build the MindSpore Dev ToolKit project from source with IntelliJ IDEA. - -## Background - -* MindSpore Dev ToolKit is a PyCharm plugin and is developed with IntelliJ IDEA. [IntelliJ IDEA](https://www.jetbrains.com/idea/download) and PyCharm are both IDEs developed by JetBrains. -* MindSpore Dev ToolKit is developed on JDK 11. If you are unfamiliar with the JDK, visit [https://jdk.java.net/](https://jdk.java.net/) to learn about the JDK and Java. -* MindSpore Dev ToolKit is built with [Gradle](https://gradle.org) 6.6.1, which does not need to be installed in advance: IntelliJ IDEA automatically sets up the required Gradle through the "gradle wrapper" mechanism in the code. - -## Prerequisites - -* Make sure [IntelliJ IDEA](https://www.jetbrains.com/idea/download) is installed. - -* Make sure JDK 11 is installed. - Note: IntelliJ IDEA 2021.3 ships with a JDK named jbr-11 (JetBrains Runtime version 11.0.10), which can be used directly. - - ![img](images/clip_image031.jpg) - -## Build - -1. Make sure all the prerequisites are configured. - -2. Download the source code of [this project](https://gitee.com/mindspore/ide-plugin) from the repository. - - * Download the code as a zip package directly. - - ![img](images/clip_image032.jpg) - - * Or clone it with git. - - ``` - git clone https://gitee.com/mindspore/ide-plugin.git - ``` - -3. Open the project with IntelliJ IDEA. - - 3.1 Choose the Open option under the File menu. ***File -> Open*** - - ![img](images/clip_image033.jpg) - - 3.2 Navigate to the location of the downloaded project. - - ![img](images/clip_image034.jpg) - - 3.3 Click load in the pop-up at the bottom right, or right-click the pycharm/settings.gradle file and select Link Gradle Project. - - ![img](images/clip_image035.jpg) - - ![img](images/clip_image036.jpg) - -4. 
If prompted that there is no JDK, select one. ***Skip this step if a JDK is already configured*** - - 4.1 Without a JDK, the page looks as shown below. - - ![img](images/clip_image037.jpg) - - 4.2 File->Project Structure. - - ![img](images/clip_image038.jpg) - - 4.3 Select JDK 11. - - ![img](images/clip_image039.jpg) - -5. Wait for the sync to finish. - - ![img](images/clip_image040.jpg) - -6. Build the project. - - ![img](images/clip_image042.jpg) - -7. The build completes. - - ![img](images/clip_image044.jpg) - -8. After the build finishes, get the plugin package from the /pycharm/build/distributions directory under the project directory. - - ![img](images/clip_image046.jpg) - -## References - -* The build of this project is based on the [Building Plugins with Gradle](https://plugins.jetbrains.com/docs/intellij/gradle-build-system.html) chapter of the IntelliJ Platform Plugin SDK. For advanced features such as debugging, read the official documentation. \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_zh_cn/conf.py b/docs/devtoolkit/docs/source_zh_cn/conf.py deleted file mode 100644 index 6e39e02d1521466710d67e74938fcbb94bdf6d6e..0000000000000000000000000000000000000000 --- a/docs/devtoolkit/docs/source_zh_cn/conf.py +++ /dev/null @@ -1,89 +0,0 @@ -# Configuration file for the Sphinx documentation builder. -# -# This file only contains a selection of the most common options. For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import re - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Dev Toolkit' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. 
They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'myst_parser', - 'sphinx.ext.autodoc' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -pygments_style = 'sphinx' - -# -- Options for HTML output ------------------------------------------------- - -language = 'zh_CN' -locale_dirs = ['../../../../resource/locale/'] -gettext_compact = False - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. 
-# -html_theme = 'sphinx_rtd_theme' - -html_search_options = {'dict': '../../../resource/jieba.txt'} - -html_static_path = ['_static'] - -src_release = os.path.join(os.getenv("DT_PATH"), 'RELEASE_CN.md') -des_release = "./RELEASE.md" -with open(src_release, "r", encoding="utf-8") as f: - data = f.read() -if len(re.findall("\n## (.*?)\n",data)) > 1: - content = re.findall("(## [\s\S\n]*?)\n## ", data) -else: - content = re.findall("(## [\s\S\n]*)", data) -#result = content[0].replace('# MindSpore', '#', 1) -with open(des_release, "w", encoding="utf-8") as p: - p.write("# Release Notes"+"\n\n") - p.write(content[0]) \ No newline at end of file diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image002.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image002.jpg deleted file mode 100644 index 24132302f1552bed6be56b7dd660625448774680..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image002.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image004.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image004.jpg deleted file mode 100644 index 7ed0e3729940f514a7bfd61c1a7be22166c0bb02..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image004.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image006.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image006.jpg deleted file mode 100644 index e0c323eec249024fe19126ce4c931133564cf7b7..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image006.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image008.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image008.jpg deleted file mode 100644 index a071ad67222931372d4b62f7b0cf334a4015e70d..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image008.jpg 
and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image010.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image010.jpg deleted file mode 100644 index 43ca88d40bc56d5a5113bc29b97a7b559ef659af..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image010.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image012.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image012.jpg deleted file mode 100644 index 0e35c9f219292913a51f1f0d5b7a5e154008620f..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image012.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image014.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image014.jpg deleted file mode 100644 index 794de60c7d7e76a58e8d7212e449a1bd8e194b21..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image014.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image015.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image015.jpg deleted file mode 100644 index 8172e21f871bed6866b3a91b83252838228d257a..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image015.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image016.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image016.jpg deleted file mode 100644 index c836c0cf7898e4757ddb3410dde18e754894dd25..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image016.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image018.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image018.jpg deleted file mode 100644 index 777738f7cf60454b7b5f26e6c5a29ba5d55750ff..0000000000000000000000000000000000000000 Binary files 
a/docs/devtoolkit/docs/source_zh_cn/images/clip_image018.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image019.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image019.jpg deleted file mode 100644 index ab02b702bfd1c0986adb4a15c1f455b56df0a4a1..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image019.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image020.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image020.jpg deleted file mode 100644 index d946c3cb3a851f690a5643b5afe119597aed5b22..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image020.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image021.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image021.jpg deleted file mode 100644 index 74672d9513f4a60f77450ae5516cee2060215241..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image021.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image022.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image022.jpg deleted file mode 100644 index 6b26f18c7d8bb43db0beb8a8d2bd386489192922..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image022.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image023.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image023.jpg deleted file mode 100644 index 5981a0fb25c681417a0bdf24d2392411b41f0faf..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image023.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image024.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image024.jpg deleted file mode 100644 index 
505e8b6c5c4d81dfd67d91b1dad08f938444368a..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image024.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image025.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image025.jpg deleted file mode 100644 index 946e276b6a982303b6b146266037a739e6c2639a..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image025.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image026.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image026.jpg deleted file mode 100644 index 8b787215af5cf9ae223c2b33121c3171923d2de2..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image026.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image027.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image027.jpg deleted file mode 100644 index aa4d7d4a8a6b503fe29885368547daa535e34796..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image027.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image028.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image028.jpg deleted file mode 100644 index 3126f80aecac28e8beaa54dc393122c60dbe1357..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image028.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image029.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image029.jpg deleted file mode 100644 index 6587240e4a456f3792fece52bcdcbbed077ca67b..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image029.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image031.jpg 
b/docs/devtoolkit/docs/source_zh_cn/images/clip_image031.jpg deleted file mode 100644 index 2f829b48e72e62525860cfe599e0a4ada82010ca..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image031.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image032.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image032.jpg deleted file mode 100644 index 37589efbe0f57c442f665824831a2685d81c8713..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image032.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image033.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image033.jpg deleted file mode 100644 index bdca68324cf7ee8f4e9bd18817a82954910e52c9..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image033.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image034.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image034.jpg deleted file mode 100644 index 874b10d4b2ca476da16a4d1e749cdb6b31ecb59e..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image034.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image035.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image035.jpg deleted file mode 100644 index 0b0465169553e57795320255295b8fa789950522..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image035.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image036.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image036.jpg deleted file mode 100644 index c7c6c72819b655884d97637b696d1814e5a7fdbf..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image036.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_zh_cn/images/clip_image037.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image037.jpg deleted file mode 100644 index 531e8184e02c43aa177a51c3cc32355cc3df9d42..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image037.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image038.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image038.jpg deleted file mode 100644 index a8b4d88190c626139bad49cd42a9f7e908b4d0e4..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image038.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image039.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image039.jpg deleted file mode 100644 index 2eab0ceac9c1bd5d8b6ade3d65a6a3ce8b1f8fd4..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image039.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image040.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image040.jpg deleted file mode 100644 index a879fb1f12d8b6c4bd02332abf9b3bd734207763..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image040.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image042.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image042.jpg deleted file mode 100644 index 2454ade258da6d428c9e23ece2adf7f0291d1a12..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image042.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image044.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image044.jpg deleted file mode 100644 index cbff652015c36a5856afc909518f3c0fd22f23ff..0000000000000000000000000000000000000000 Binary files 
a/docs/devtoolkit/docs/source_zh_cn/images/clip_image044.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image046.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image046.jpg deleted file mode 100644 index 58a493ea4f69b264fc69cfd3e34f32d5a171c303..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image046.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image050.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image050.jpg deleted file mode 100644 index 35cc26d483358550c9a53ce855c2ae483eddb7e1..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image050.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image060.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image060.jpg deleted file mode 100644 index 7723975694f7f56d88187a69626343af11efbd23..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image060.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image062.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image062.jpg deleted file mode 100644 index 838bc48ab8d77f7dbba9ca02925838a49b19ce53..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image062.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image064.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image064.jpg deleted file mode 100644 index fb39e70b78b45af301973ea802a219c482a21590..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image064.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image066.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image066.jpg deleted file mode 100644 index 
0a596cfb3ef7a79674ff33a7be5c97859cc2b9c4..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image066.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image068.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image068.jpg deleted file mode 100644 index 0023ba9236a768001e462d6a10434719f3f733fd..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image068.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image072.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image072.jpg deleted file mode 100644 index d1e5fad4192d4cb5821cafe4031dbc4ff599eccd..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image072.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image074.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image074.jpg deleted file mode 100644 index 97fa2b21b4029ff75156893f8abdff2e77aa38bd..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image074.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image076.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image076.jpg deleted file mode 100644 index e754c7dcd30ee6fa82ab20bde4d66f69aabe2fa7..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image076.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image088.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image088.jpg deleted file mode 100644 index 8b85f0727893c4cf6cd258550466ca4f4a340e6e..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image088.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image090.jpg 
b/docs/devtoolkit/docs/source_zh_cn/images/clip_image090.jpg deleted file mode 100644 index a3f405388fd75b23b652bc86475be5fd5e1f48ac..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image090.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image092.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image092.jpg deleted file mode 100644 index 68ca9c66fc3f03760873075af20c8a9e28aaab48..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image092.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image093.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image093.jpg deleted file mode 100644 index 594b2ceadb2c27290e0339e14b298fa2feffe6a9..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image093.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image094.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image094.jpg deleted file mode 100644 index e931a95180d27d55590e73948ebe80a1f81bede1..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image094.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image096.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image096.jpg deleted file mode 100644 index 3ed0c88500bb4caffccea4d08aaa3a6310e177bd..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image096.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image097.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image097.jpg deleted file mode 100644 index 0cb303bea0e9e88bf56fe22806c91605f1822606..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image097.jpg and /dev/null differ diff --git 
a/docs/devtoolkit/docs/source_zh_cn/images/clip_image100.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image100.jpg deleted file mode 100644 index 7dd66fa814e1dcc67b30e41e05ff2d36a2cae2a8..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image100.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image101.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image101.jpg deleted file mode 100644 index 656ec720e30e09c72ea2c61c9caad41282a7f923..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image101.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image102.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image102.jpg deleted file mode 100644 index 973c9940bc8e72ed2355026c98f9885f30096ba4..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image102.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image103.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image103.jpg deleted file mode 100644 index e945dc73adf0b3755bbed6f54b8f6255d7d8f3fc..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image103.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image104.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image104.jpg deleted file mode 100644 index 6a20059ff34b48f657c7d7d998597c7f1332d220..0000000000000000000000000000000000000000 Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image104.jpg and /dev/null differ diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image105.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image105.jpg deleted file mode 100644 index 62606f6b5b4cc9eabea35e79f2da9dae45b91c29..0000000000000000000000000000000000000000 Binary files 
a/docs/devtoolkit/docs/source_zh_cn/images/clip_image105.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image106.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image106.jpg
deleted file mode 100644
index 60a0c13f748cf1265ee43a766a86041fc1abe7c6..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image106.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image107.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image107.jpg
deleted file mode 100644
index 19e59bb533d230c64fd12b2a0f4abb5156428695..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image107.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image108.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image108.jpg
deleted file mode 100644
index 14bfeaaf6bc7416d4b53aeb85dda5883dfb22aa9..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image108.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image109.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image109.jpg
deleted file mode 100644
index 3a155082deeab5246182f3fceb28cc1b65e709e1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image109.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image110.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image110.jpg
deleted file mode 100644
index dee5a5107e59244dc24c1a41a928b3aaf705b052..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image110.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image111.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image111.jpg
deleted file mode 100644
index 6324111bd6e2bf8f9f62429058b4e3f5fd5b36b9..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image111.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image112.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image112.jpg
deleted file mode 100644
index c71a61026a48cd7f1732b4b9d41c00d0c03bc521..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image112.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image113.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image113.jpg
deleted file mode 100644
index d44ede801205f4bf4981cf751ca49f5df3324c2e..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image113.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image114.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image114.jpg
deleted file mode 100644
index 43dbe99fc91ee7907928e9dabe5e50b1fd202fef..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image114.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image115.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image115.jpg
deleted file mode 100644
index e676224e6be68f2b770d8effac56ab5b3f433e99..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image115.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image116.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image116.jpg
deleted file mode 100644
index 3c0569c618cb45d19783b5034209050ad1dee716..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image116.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image117.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image117.jpg
deleted file mode 100644
index 1f3d079ed853fef95ce56f258b489d31117da8cf..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image117.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image118.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image118.jpg
deleted file mode 100644
index 2729fbe6133df67d4fab7438244182ba836f5908..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image118.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image119.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image119.jpg
deleted file mode 100644
index 44fd8b841548900ab6ddc598b2ad0124d8341864..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image119.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image120.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image120.jpg
deleted file mode 100644
index 19d88f70e7675b110b582164e43ad4f60419c7be..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image120.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image121.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image121.jpg
deleted file mode 100644
index 4f997ac892b2ce7d1daecc358afabcccec0d04be..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image121.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image122.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image122.jpg
deleted file mode 100644
index 9a16de514b7ec68c579cc02ec680df3db1292746..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image122.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image123.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image123.jpg
deleted file mode 100644
index f5f4a82d076f0b22b2e85d139c2ac5b6572bb571..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image123.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image124.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image124.jpg
deleted file mode 100644
index e835a4b22a7032069e7c2edb6eda1b012a15671f..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image124.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image125.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image125.jpg
deleted file mode 100644
index 3b779d9ba44f054c3673a761417035a86f97db06..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image125.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image126.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image126.jpg
deleted file mode 100644
index 93f72bd16af2289bdfdd24120acadb3506f37ab5..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image126.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image127.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image127.jpg
deleted file mode 100644
index 09787b77310e4dabb74f48d154168c67204f2243..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image127.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image128.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image128.jpg
deleted file mode 100644
index 074ab2fc864e572f1a4c59511316466ec72927e0..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image128.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image129.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image129.jpg
deleted file mode 100644
index c144eb5cfdf77caa18567a909b27c55880960a08..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image129.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image130.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image130.jpg
deleted file mode 100644
index a88fdddd286bd9b2eb6773734227471f5f2dd655..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image130.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image131.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image131.jpg
deleted file mode 100644
index c51c4fdf2370cb292a811dce8b467ded6182d86c..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image131.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image132.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image132.jpg
deleted file mode 100644
index 085df85bf7f3f65fb1e21e135cac3ccb18cbe3e0..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image132.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image133.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image133.jpg
deleted file mode 100644
index dbd04ac802204eb9327552431a1f5b3fe213f10b..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image133.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image134.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image134.jpg
deleted file mode 100644
index 2de14584f45993655ef626b7a76e0b22cab2eeb8..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image134.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image135.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image135.jpg
deleted file mode 100644
index aaa7212f107ca3d81470328b37f25e4ba36cf199..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image135.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image136.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image136.jpg
deleted file mode 100644
index 3aa85416624254075ae967ff13178df1922ed1b4..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image136.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image137.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image137.jpg
deleted file mode 100644
index 5db1e281048f80b4e82cb7d375c4fa08729f4ea7..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image137.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image138.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image138.jpg
deleted file mode 100644
index 4884213affb1aee8b54131690cf5042293803eb2..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image138.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image139.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image139.jpg
deleted file mode 100644
index 0173dc81f0ba01639ef60397d45185323fe84440..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image139.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image140.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image140.jpg
deleted file mode 100644
index 900204ac42a37068c1292b4779a08336e84a88ff..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image140.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image141.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image141.jpg
deleted file mode 100644
index 49d46b7bbb3ff297f7975c5f96ccfa42b0c3fdc1..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image141.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image142.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image142.jpg
deleted file mode 100644
index 12a49c84869560400bc3859aef3b80cea3bd7722..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image142.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/images/clip_image143.jpg b/docs/devtoolkit/docs/source_zh_cn/images/clip_image143.jpg
deleted file mode 100644
index 7a2e5f268171c29cfc84c6f9748f5ebe3a7ef399..0000000000000000000000000000000000000000
Binary files a/docs/devtoolkit/docs/source_zh_cn/images/clip_image143.jpg and /dev/null differ
diff --git a/docs/devtoolkit/docs/source_zh_cn/index.rst b/docs/devtoolkit/docs/source_zh_cn/index.rst
deleted file mode 100644
index 55cf86cc6f52ce0661e93ad1c642bc37e009e81a..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/index.rst
+++ /dev/null
@@ -1,61 +0,0 @@
-MindSpore Dev Toolkit Documentation
-===================================
-
-MindSpore Dev Toolkit is a `PyCharm `_ (cross-platform Python IDE) plugin that supports MindSpore development, providing features such as `project creation `_, `smart completion `_, `API search `_, and `documentation search `_.
-
-Through deep learning, intelligent search, and intelligent recommendation technologies, MindSpore Dev Toolkit builds the best intelligent-computing experience, and is dedicated to comprehensively improving the usability of the MindSpore framework and helping promote the MindSpore ecosystem.
-
-Repository address:
-
-System Requirements
-------------------------------
-
-- Operating systems supported by the plugin:
-
-  - Windows 10
-
-  - Linux
-
-  - MacOS (x86 architecture only; the completion feature is not yet available)
-
-- 
PyCharm versions supported by the plugin:
-
-  - 2020.3
-
-  - 2021.x
-
-  - 2022.x
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: PyCharm Plugin User Guide
-   :hidden:
-
-   PyCharm_plugin_install
-   compiling
-   smart_completion
-   PyCharm_change_version
-   api_search
-   api_scanning
-   knowledge_search
-   mindspore_project_wizard
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: VSCode Plugin User Guide
-   :hidden:
-
-   VSCode_plugin_install
-   VSCode_smart_completion
-   VSCode_change_version
-   VSCode_api_search
-   VSCode_api_scan
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: RELEASE NOTES
-
-   RELEASE
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/knowledge_search.md b/docs/devtoolkit/docs/source_zh_cn/knowledge_search.md
deleted file mode 100644
index 6ba1068fafc6861143763f2a30a347bad4ed3ced..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/knowledge_search.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Smart Knowledge Search
-
-[![View Source](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/knowledge_search.md)
-
-## Features
-
-* Targeted recommendation: provides more accurate search results based on the user's usage habits.
-* An immersive documentation-retrieval experience that avoids switching back and forth between the IDE and a browser, with a narrow-screen layout adapted to the sidebar.
-
-## Usage
-
-1. Open the sidebar to show the search home page.
-
-    ![img](images/clip_image072.jpg)
-
-2. Enter "api mapping", click search, and view the results.
-
-    ![img](images/clip_image074.jpg)
-
-3. 
Click the home button to return to the home page.
-
-    ![img](images/clip_image076.jpg)
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/mindspore_project_wizard.md b/docs/devtoolkit/docs/source_zh_cn/mindspore_project_wizard.md
deleted file mode 100644
index ca5972171d1a96375155f6aef3b094cbf73ead13..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/mindspore_project_wizard.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Creating a Project
-
-[![View Source](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/mindspore_project_wizard.md)
-
-## Technical Background
-
-This feature is built on [conda](https://conda.io). Conda is a package and environment management system and one of the installation methods recommended by MindSpore.
-
-## Features
-
-* Create a conda environment or select an existing one, and install the MindSpore binary package into it.
-* Deploy a best-practice template, which not only tests whether the environment was installed successfully but also gives new users an introduction to MindSpore.
-* With a good network connection, installation completes within 10 minutes and you can start experiencing MindSpore, saving new users up to 80% of the environment-setup time.
-
-## Usage
-
-1. Choose **File** > **New Project**.
-
-    ![img](images/clip_image002.jpg)
-
-2. Select **MindSpore**.
-
-    ![img](images/clip_image004.jpg)
-
-3. Download and install Miniconda. ***Skip this step if conda is already installed.***
-
-    3.1 Click the Install Miniconda Automatically button.
-
-    ![img](images/clip_image006.jpg)
-
-    3.2 Choose the download and installation folder. **Keeping the default path for the conda installation is recommended.**
-
-    ![img](images/clip_image008.jpg)
-
-    3.3 Click the **Install** button and wait for the download and installation to finish.
-
-    ![img](images/clip_image010.jpg)
-
-    ![img](images/clip_image012.jpg)
-
-    3.4 The Miniconda download and installation is complete.
-
-    ![img](images/clip_image014.jpg)
-
-    3.5 Restart PyCharm as prompted, or restart it yourself later. ***Note: the following steps can only continue after PyCharm has been restarted.***
-
-    ![img](images/clip_image015.jpg)
-
-4. Confirm that the Conda executable path has been filled in correctly. If it has not been filled in automatically, click the folder button and select the path of the locally installed conda.
-
-    ![img](images/clip_image016.jpg)
-
-5. Create a new conda environment or select an existing one.
-
-    * Create a new conda environment. **Keeping the default path for the conda environment is recommended. Due to a PyCharm limitation, on Linux a directory other than the default one cannot currently be selected.**
-
-    ![img](images/clip_image018.jpg)
-
-    * Select an existing conda environment in PyCharm.
-
-    ![img](images/clip_image019.jpg)
-
-6. 
Select the hardware environment and a MindSpore best-practice project template.
-
-    6.1 Select the hardware environment.
-
-    ![img](images/clip_image020.jpg)
-
-    6.2 Select a best-practice template. The best-practice templates are sample projects provided by MindSpore to help new users get familiar with it, and they can be run directly.
-
-    ![img](images/clip_image021.jpg)
-
-7. Click the **Create** button to create the project, and wait for MindSpore to be downloaded and installed successfully.
-
-    7.1 Click the **Create** button to create the new MindSpore project.
-
-    ![img](images/clip_image022.jpg)
-
-    7.2 The conda environment is being created.
-
-    ![img](images/clip_image023.jpg)
-
-    7.3 MindSpore is being configured through conda.
-
-    ![img](images/clip_image024.jpg)
-
-8. The MindSpore project has been created.
-
-    ![img](images/clip_image025.jpg)
-
-9. Verify whether the MindSpore project was created successfully.
-
-    * Click Terminal at the bottom, enter python -c "import mindspore;mindspore.run_check()", and check the output. If the version number and other information are displayed, as in the figure below, the MindSpore environment is ready.
-
-    ![img](images/clip_image026.jpg)
-
-    * If a best-practice template was selected, you can test the MindSpore environment by running the best practice.
-
-    ![img](images/clip_image027.jpg)
-
-    ![img](images/clip_image028.jpg)
-
-    ![img](images/clip_image029.jpg)
\ No newline at end of file
diff --git a/docs/devtoolkit/docs/source_zh_cn/smart_completion.md b/docs/devtoolkit/docs/source_zh_cn/smart_completion.md
deleted file mode 100644
index 5e4cf4edd5272715f7af955a697341e51f465eb5..0000000000000000000000000000000000000000
--- a/docs/devtoolkit/docs/source_zh_cn/smart_completion.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Code Completion
-
-[![View Source](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/devtoolkit/docs/source_zh_cn/smart_completion.md)
-
-## Features
-
-* Provides AI-powered code completion based on the MindSpore project.
-* Develop MindSpore easily without having a MindSpore environment installed.
-
-## Usage
-
-1. Open a Python file and start writing code.
-
-    ![img](images/clip_image088.jpg)
-
-2. Completion takes effect automatically while coding. Entries marked with the MindSpore icon are code provided by MindSpore Dev Toolkit smart completion.
-
-    ![img](images/clip_image090.jpg)
-
-    ![img](images/clip_image092.jpg)
-
-## Notes
-
-1. PyCharm 2021 and later re-rank completion items using machine learning, which may push the plugin's completion entries further down the list. This behavior can be disabled in the settings so that the ordering provided by MindSpore Dev Toolkit is used.
-
-    ![img](images/clip_image093.jpg)
-
-2. 
Comparison before and after disabling this option.
-
-    * After disabling.
-
-    ![img](images/clip_image094.jpg)
-
-    * Before disabling.
-
-    ![img](images/clip_image096.jpg)
\ No newline at end of file
diff --git a/docs/hub/docs/Makefile b/docs/hub/docs/Makefile
deleted file mode 100644
index 1eff8952707bdfa503c8d60c1e9a903053170ba2..0000000000000000000000000000000000000000
--- a/docs/hub/docs/Makefile
+++ /dev/null
@@ -1,20 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS ?=
-SPHINXBUILD ?= sphinx-build
-SOURCEDIR = source_zh_cn
-BUILDDIR = build_zh_cn
-
-# Put it first so that "make" without argument is like "make help".
-help:
-	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/hub/docs/_ext/overwriteautosummary_generate.txt b/docs/hub/docs/_ext/overwriteautosummary_generate.txt
deleted file mode 100644
index 4b0a1b1dd2b410ecab971b13da9993c90d65ef0d..0000000000000000000000000000000000000000
--- a/docs/hub/docs/_ext/overwriteautosummary_generate.txt
+++ /dev/null
@@ -1,707 +0,0 @@
-"""
-    sphinx.ext.autosummary.generate
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Usable as a library or script to generate automatic RST source files for
-    items referred to in autosummary:: directives.
-
-    Each generated RST file contains a single auto*:: directive which
-    extracts the docstring of the referred item.
-
-    Example Makefile rule::
-
-        generate:
-            sphinx-autogen -o source/generated source/*.rst
-
-    :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import argparse
-import importlib
-import inspect
-import locale
-import os
-import pkgutil
-import pydoc
-import re
-import sys
-import warnings
-from gettext import NullTranslations
-from os import path
-from typing import Any, Dict, List, NamedTuple, Sequence, Set, Tuple, Type, Union
-
-from jinja2 import TemplateNotFound
-from jinja2.sandbox import SandboxedEnvironment
-
-import sphinx.locale
-from sphinx import __display_version__, package_dir
-from sphinx.application import Sphinx
-from sphinx.builders import Builder
-from sphinx.config import Config
-from sphinx.deprecation import RemovedInSphinx50Warning
-from sphinx.ext.autodoc import Documenter
-from sphinx.ext.autodoc.importer import import_module
-from sphinx.ext.autosummary import (ImportExceptionGroup, get_documenter, import_by_name,
-                                    import_ivar_by_name)
-from sphinx.locale import __
-from sphinx.pycode import ModuleAnalyzer, PycodeError
-from sphinx.registry import SphinxComponentRegistry
-from sphinx.util import logging, rst, split_full_qualified_name, get_full_modname
-from sphinx.util.inspect import getall, safe_getattr
-from sphinx.util.osutil import ensuredir
-from sphinx.util.template import SphinxTemplateLoader
-
-logger = logging.getLogger(__name__)
-
-
-class DummyApplication:
-    """Dummy Application class for sphinx-autogen command."""
-
-    def __init__(self, translator: NullTranslations) -> None:
-        self.config = Config()
-        self.registry = SphinxComponentRegistry()
-        self.messagelog: List[str] = []
-        self.srcdir = "/"
-        self.translator = translator
-        self.verbosity = 0
-        self._warncount = 0
-        self.warningiserror = False
-
-        self.config.add('autosummary_context', {}, True, None)
-        self.config.add('autosummary_filename_map', {}, True, None)
-        self.config.add('autosummary_ignore_module_all', True, 'env', bool)
-        self.config.add('docs_branch', '', True, None)
-        self.config.add('branch', '', True, None)
-        self.config.add('cst_module_name', '', True, None)
-        self.config.add('copy_repo', 
'', True, None) - self.config.add('giturl', '', True, None) - self.config.add('repo_whl', '', True, None) - self.config.init_values() - - def emit_firstresult(self, *args: Any) -> None: - pass - - -class AutosummaryEntry(NamedTuple): - name: str - path: str - template: str - recursive: bool - - -def setup_documenters(app: Any) -> None: - from sphinx.ext.autodoc import (AttributeDocumenter, ClassDocumenter, DataDocumenter, - DecoratorDocumenter, ExceptionDocumenter, - FunctionDocumenter, MethodDocumenter, ModuleDocumenter, - NewTypeAttributeDocumenter, NewTypeDataDocumenter, - PropertyDocumenter) - documenters: List[Type[Documenter]] = [ - ModuleDocumenter, ClassDocumenter, ExceptionDocumenter, DataDocumenter, - FunctionDocumenter, MethodDocumenter, NewTypeAttributeDocumenter, - NewTypeDataDocumenter, AttributeDocumenter, DecoratorDocumenter, PropertyDocumenter, - ] - for documenter in documenters: - app.registry.add_documenter(documenter.objtype, documenter) - - -def _simple_info(msg: str) -> None: - warnings.warn('_simple_info() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - print(msg) - - -def _simple_warn(msg: str) -> None: - warnings.warn('_simple_warn() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - print('WARNING: ' + msg, file=sys.stderr) - - -def _underline(title: str, line: str = '=') -> str: - if '\n' in title: - raise ValueError('Can only underline single lines') - return title + '\n' + line * len(title) - - -class AutosummaryRenderer: - """A helper class for rendering.""" - - def __init__(self, app: Union[Builder, Sphinx], template_dir: str = None) -> None: - if isinstance(app, Builder): - warnings.warn('The first argument for AutosummaryRenderer has been ' - 'changed to Sphinx object', - RemovedInSphinx50Warning, stacklevel=2) - if template_dir: - warnings.warn('template_dir argument for AutosummaryRenderer is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - - system_templates_path = [os.path.join(package_dir, 
'ext', 'autosummary', 'templates')] - loader = SphinxTemplateLoader(app.srcdir, app.config.templates_path, - system_templates_path) - - self.env = SandboxedEnvironment(loader=loader) - self.env.filters['escape'] = rst.escape - self.env.filters['e'] = rst.escape - self.env.filters['underline'] = _underline - - if isinstance(app, (Sphinx, DummyApplication)): - if app.translator: - self.env.add_extension("jinja2.ext.i18n") - self.env.install_gettext_translations(app.translator) - elif isinstance(app, Builder): - if app.app.translator: - self.env.add_extension("jinja2.ext.i18n") - self.env.install_gettext_translations(app.app.translator) - - def exists(self, template_name: str) -> bool: - """Check if template file exists.""" - warnings.warn('AutosummaryRenderer.exists() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - try: - self.env.get_template(template_name) - return True - except TemplateNotFound: - return False - - def render(self, template_name: str, context: Dict) -> str: - """Render a template file.""" - try: - template = self.env.get_template(template_name) - except TemplateNotFound: - try: - # objtype is given as template_name - template = self.env.get_template('autosummary/%s.rst' % template_name) - except TemplateNotFound: - # fallback to base.rst - template = self.env.get_template('autosummary/base.rst') - - return template.render(context) - - -# -- Generating output --------------------------------------------------------- - - -class ModuleScanner: - def __init__(self, app: Any, obj: Any) -> None: - self.app = app - self.object = obj - - def get_object_type(self, name: str, value: Any) -> str: - return get_documenter(self.app, value, self.object).objtype - - def is_skipped(self, name: str, value: Any, objtype: str) -> bool: - try: - return self.app.emit_firstresult('autodoc-skip-member', objtype, - name, value, False, {}) - except Exception as exc: - logger.warning(__('autosummary: failed to determine %r to be documented, ' - 'the following 
exception was raised:\n%s'), - name, exc, type='autosummary') - return False - - def scan(self, imported_members: bool) -> List[str]: - members = [] - for name in members_of(self.object, self.app.config): - try: - value = safe_getattr(self.object, name) - except AttributeError: - value = None - - objtype = self.get_object_type(name, value) - if self.is_skipped(name, value, objtype): - continue - - try: - if inspect.ismodule(value): - imported = True - elif safe_getattr(value, '__module__') != self.object.__name__: - imported = True - else: - imported = False - except AttributeError: - imported = False - - respect_module_all = not self.app.config.autosummary_ignore_module_all - if imported_members: - # list all members up - members.append(name) - elif imported is False: - # list not-imported members - members.append(name) - elif '__all__' in dir(self.object) and respect_module_all: - # list members that have __all__ set - members.append(name) - - return members - - -def members_of(obj: Any, conf: Config) -> Sequence[str]: - """Get the members of ``obj``, possibly ignoring the ``__all__`` module attribute - - Follows the ``conf.autosummary_ignore_module_all`` setting.""" - - if conf.autosummary_ignore_module_all: - return dir(obj) - else: - return getall(obj) or dir(obj) - - -def generate_autosummary_content(name: str, obj: Any, parent: Any, - template: AutosummaryRenderer, template_name: str, - imported_members: bool, app: Any, - recursive: bool, context: Dict, - modname: str = None, qualname: str = None) -> str: - doc = get_documenter(app, obj, parent) - - def skip_member(obj: Any, name: str, objtype: str) -> bool: - try: - return app.emit_firstresult('autodoc-skip-member', objtype, name, - obj, False, {}) - except Exception as exc: - logger.warning(__('autosummary: failed to determine %r to be documented, ' - 'the following exception was raised:\n%s'), - name, exc, type='autosummary') - return False - - def get_class_members(obj: Any) -> Dict[str, Any]: - members 
= sphinx.ext.autodoc.get_class_members(obj, [qualname], safe_getattr) - return {name: member.object for name, member in members.items()} - - def get_module_members(obj: Any) -> Dict[str, Any]: - members = {} - for name in members_of(obj, app.config): - try: - members[name] = safe_getattr(obj, name) - except AttributeError: - continue - return members - - def get_all_members(obj: Any) -> Dict[str, Any]: - if doc.objtype == "module": - return get_module_members(obj) - elif doc.objtype == "class": - return get_class_members(obj) - return {} - - def get_members(obj: Any, types: Set[str], include_public: List[str] = [], - imported: bool = True) -> Tuple[List[str], List[str]]: - items: List[str] = [] - public: List[str] = [] - - all_members = get_all_members(obj) - for name, value in all_members.items(): - documenter = get_documenter(app, value, obj) - if documenter.objtype in types: - # skip imported members if expected - if imported or getattr(value, '__module__', None) == obj.__name__: - skipped = skip_member(value, name, documenter.objtype) - if skipped is True: - pass - elif skipped is False: - # show the member forcedly - items.append(name) - public.append(name) - else: - items.append(name) - if name in include_public or not name.startswith('_'): - # considers member as public - public.append(name) - return public, items - - def get_module_attrs(members: Any) -> Tuple[List[str], List[str]]: - """Find module attributes with docstrings.""" - attrs, public = [], [] - try: - analyzer = ModuleAnalyzer.for_module(name) - attr_docs = analyzer.find_attr_docs() - for namespace, attr_name in attr_docs: - if namespace == '' and attr_name in members: - attrs.append(attr_name) - if not attr_name.startswith('_'): - public.append(attr_name) - except PycodeError: - pass # give up if ModuleAnalyzer fails to parse code - return public, attrs - - def get_modules(obj: Any) -> Tuple[List[str], List[str]]: - items: List[str] = [] - for _, modname, _ispkg in 
pkgutil.iter_modules(obj.__path__): - fullname = name + '.' + modname - try: - module = import_module(fullname) - if module and hasattr(module, '__sphinx_mock__'): - continue - except ImportError: - pass - - items.append(fullname) - public = [x for x in items if not x.split('.')[-1].startswith('_')] - return public, items - - ns: Dict[str, Any] = {} - ns.update(context) - - if doc.objtype == 'module': - scanner = ModuleScanner(app, obj) - ns['members'] = scanner.scan(imported_members) - ns['functions'], ns['all_functions'] = \ - get_members(obj, {'function'}, imported=imported_members) - ns['classes'], ns['all_classes'] = \ - get_members(obj, {'class'}, imported=imported_members) - ns['exceptions'], ns['all_exceptions'] = \ - get_members(obj, {'exception'}, imported=imported_members) - ns['attributes'], ns['all_attributes'] = \ - get_module_attrs(ns['members']) - ispackage = hasattr(obj, '__path__') - if ispackage and recursive: - ns['modules'], ns['all_modules'] = get_modules(obj) - elif doc.objtype == 'class': - ns['members'] = dir(obj) - ns['inherited_members'] = \ - set(dir(obj)) - set(obj.__dict__.keys()) - ns['methods'], ns['all_methods'] = \ - get_members(obj, {'method'}, ['__init__']) - ns['attributes'], ns['all_attributes'] = \ - get_members(obj, {'attribute', 'property'}) - - if modname is None or qualname is None: - modname, qualname = split_full_qualified_name(name) - - if doc.objtype in ('method', 'attribute', 'property'): - ns['class'] = qualname.rsplit(".", 1)[0] - - if doc.objtype in ('class',): - shortname = qualname - else: - shortname = qualname.rsplit(".", 1)[-1] - - ns['fullname'] = name - ns['module'] = modname - ns['objname'] = qualname - ns['name'] = shortname - - ns['objtype'] = doc.objtype - ns['underline'] = len(name) * '=' - - if template_name: - return template.render(template_name, ns) - else: - return template.render(doc.objtype, ns) - - -def generate_autosummary_docs(sources: List[str], output_dir: str = None, - suffix: str = '.rst', 
base_path: str = None, - builder: Builder = None, template_dir: str = None, - imported_members: bool = False, app: Any = None, - overwrite: bool = True, encoding: str = 'utf-8') -> None: - - if builder: - warnings.warn('builder argument for generate_autosummary_docs() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - - if template_dir: - warnings.warn('template_dir argument for generate_autosummary_docs() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - - showed_sources = list(sorted(sources)) - if len(showed_sources) > 20: - showed_sources = showed_sources[:10] + ['...'] + showed_sources[-10:] - logger.info(__('[autosummary] generating autosummary for: %s') % - ', '.join(showed_sources)) - - if output_dir: - logger.info(__('[autosummary] writing to %s') % output_dir) - - if base_path is not None: - sources = [os.path.join(base_path, filename) for filename in sources] - - template = AutosummaryRenderer(app) - - # read - items = find_autosummary_in_files(sources) - - # keep track of new files - new_files = [] - - if app: - filename_map = app.config.autosummary_filename_map - else: - filename_map = {} - - # write - for entry in sorted(set(items), key=str): - if entry.path is None: - # The corresponding autosummary:: directive did not have - # a :toctree: option - continue - - path = output_dir or os.path.abspath(entry.path) - ensuredir(path) - - try: - name, obj, parent, modname = import_by_name(entry.name, grouped_exception=True) - qualname = name.replace(modname + ".", "") - except ImportExceptionGroup as exc: - try: - # try to import as an instance attribute - name, obj, parent, modname = import_ivar_by_name(entry.name) - qualname = name.replace(modname + ".", "") - except ImportError as exc2: - if exc2.__cause__: - exceptions: List[BaseException] = exc.exceptions + [exc2.__cause__] - else: - exceptions = exc.exceptions + [exc2] - - errors = list(set("* %s: %s" % (type(e).__name__, e) for e in exceptions)) - logger.warning(__('[autosummary] 
failed to import %s.\nPossible hints:\n%s'), - entry.name, '\n'.join(errors)) - continue - - context: Dict[str, Any] = {} - if app: - context.update(app.config.autosummary_context) - - content = generate_autosummary_content(name, obj, parent, template, entry.template, - imported_members, app, entry.recursive, context, - modname, qualname) - try: - py_source_rel = get_full_modname(modname, qualname).replace('.', '/') + '.py' - except: - logger.warning(name) - py_source_rel = '' - - re_view = f"\n.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/{app.config.docs_branch}/" + \ - f"resource/_static/logo_source_en.svg\n :target: " + app.config.giturl + \ - f"{app.config.copy_repo}/blob/{app.config.branch}/" + app.config.repo_whl + \ - py_source_rel.split(app.config.cst_module_name)[-1] + '\n :alt: View Source On Gitee\n\n' - - if re_view not in content and py_source_rel: - content = re.sub('([=]{5,})\n', r'\1\n' + re_view, content, 1) - filename = os.path.join(path, filename_map.get(name, name) + suffix) - if os.path.isfile(filename): - with open(filename, encoding=encoding) as f: - old_content = f.read() - - if content == old_content: - continue - elif overwrite: # content has changed - with open(filename, 'w', encoding=encoding) as f: - f.write(content) - new_files.append(filename) - else: - with open(filename, 'w', encoding=encoding) as f: - f.write(content) - new_files.append(filename) - - # descend recursively to new files - if new_files: - generate_autosummary_docs(new_files, output_dir=output_dir, - suffix=suffix, base_path=base_path, - builder=builder, template_dir=template_dir, - imported_members=imported_members, app=app, - overwrite=overwrite) - - -# -- Finding documented entries in files --------------------------------------- - -def find_autosummary_in_files(filenames: List[str]) -> List[AutosummaryEntry]: - """Find out what items are documented in source/*.rst. - - See `find_autosummary_in_lines`. 
-    """
-    documented: List[AutosummaryEntry] = []
-    for filename in filenames:
-        with open(filename, encoding='utf-8', errors='ignore') as f:
-            lines = f.read().splitlines()
-        documented.extend(find_autosummary_in_lines(lines, filename=filename))
-    return documented
-
-
-def find_autosummary_in_docstring(name: str, module: str = None, filename: str = None
-                                  ) -> List[AutosummaryEntry]:
-    """Find out what items are documented in the given object's docstring.
-
-    See `find_autosummary_in_lines`.
-    """
-    if module:
-        warnings.warn('module argument for find_autosummary_in_docstring() is deprecated.',
-                      RemovedInSphinx50Warning, stacklevel=2)
-
-    try:
-        real_name, obj, parent, modname = import_by_name(name, grouped_exception=True)
-        lines = pydoc.getdoc(obj).splitlines()
-        return find_autosummary_in_lines(lines, module=name, filename=filename)
-    except AttributeError:
-        pass
-    except ImportExceptionGroup as exc:
-        errors = list(set("* %s: %s" % (type(e).__name__, e) for e in exc.exceptions))
-        print('Failed to import %s.\nPossible hints:\n%s' % (name, '\n'.join(errors)))
-    except SystemExit:
-        print("Failed to import '%s'; the module executes module-level "
-              "statements and it might call sys.exit()." % name)
-    return []
-
-
-def find_autosummary_in_lines(lines: List[str], module: str = None, filename: str = None
-                              ) -> List[AutosummaryEntry]:
-    """Find out what items appear in autosummary:: directives in the
-    given lines.
-
-    Returns a list of (name, toctree, template) where *name* is the name
-    of an object, *toctree* the :toctree: path of the corresponding
-    autosummary directive (relative to the root of the file name), and
-    *template* the value of the :template: option. *toctree* and
-    *template* are ``None`` if the directive does not have the
-    corresponding options set.
- """ - autosummary_re = re.compile(r'^(\s*)\.\.\s+(ms[a-z]*)?autosummary::\s*') - automodule_re = re.compile( - r'^\s*\.\.\s+automodule::\s*([A-Za-z0-9_.]+)\s*$') - module_re = re.compile( - r'^\s*\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$') - autosummary_item_re = re.compile(r'^\s+(~?[_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?') - recursive_arg_re = re.compile(r'^\s+:recursive:\s*$') - toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$') - template_arg_re = re.compile(r'^\s+:template:\s*(.*?)\s*$') - - documented: List[AutosummaryEntry] = [] - - recursive = False - toctree: str = None - template = None - current_module = module - in_autosummary = False - base_indent = "" - - for line in lines: - if in_autosummary: - m = recursive_arg_re.match(line) - if m: - recursive = True - continue - - m = toctree_arg_re.match(line) - if m: - toctree = m.group(1) - if filename: - toctree = os.path.join(os.path.dirname(filename), - toctree) - continue - - m = template_arg_re.match(line) - if m: - template = m.group(1).strip() - continue - - if line.strip().startswith(':'): - continue # skip options - - m = autosummary_item_re.match(line) - if m: - name = m.group(1).strip() - if name.startswith('~'): - name = name[1:] - if current_module and \ - not name.startswith(current_module + '.'): - name = "%s.%s" % (current_module, name) - documented.append(AutosummaryEntry(name, toctree, template, recursive)) - continue - - if not line.strip() or line.startswith(base_indent + " "): - continue - - in_autosummary = False - - m = autosummary_re.match(line) - if m: - in_autosummary = True - base_indent = m.group(1) - recursive = False - toctree = None - template = None - continue - - m = automodule_re.search(line) - if m: - current_module = m.group(1).strip() - # recurse into the automodule docstring - documented.extend(find_autosummary_in_docstring( - current_module, filename=filename)) - continue - - m = module_re.match(line) - if m: - current_module = m.group(2) - continue - - return 
documented
-
-
-def get_parser() -> argparse.ArgumentParser:
-    parser = argparse.ArgumentParser(
-        usage='%(prog)s [OPTIONS] <SOURCE_FILE>...',
-        epilog=__('For more information, visit <https://www.sphinx-doc.org/>.'),
-        description=__("""
-Generate ReStructuredText using autosummary directives.
-
-sphinx-autogen is a frontend to sphinx.ext.autosummary.generate. It generates
-the reStructuredText files from the autosummary directives contained in the
-given input files.
-
-The format of the autosummary directive is documented in the
-``sphinx.ext.autosummary`` Python module and can be read using::
-
-  pydoc sphinx.ext.autosummary
-"""))
-
-    parser.add_argument('--version', action='version', dest='show_version',
-                        version='%%(prog)s %s' % __display_version__)
-
-    parser.add_argument('source_file', nargs='+',
-                        help=__('source files to generate rST files for'))
-
-    parser.add_argument('-o', '--output-dir', action='store',
-                        dest='output_dir',
-                        help=__('directory to place all output in'))
-    parser.add_argument('-s', '--suffix', action='store', dest='suffix',
-                        default='rst',
-                        help=__('default suffix for files (default: '
-                                '%(default)s)'))
-    parser.add_argument('-t', '--templates', action='store', dest='templates',
-                        default=None,
-                        help=__('custom template directory (default: '
-                                '%(default)s)'))
-    parser.add_argument('-i', '--imported-members', action='store_true',
-                        dest='imported_members', default=False,
-                        help=__('document imported members (default: '
-                                '%(default)s)'))
-    parser.add_argument('-a', '--respect-module-all', action='store_true',
-                        dest='respect_module_all', default=False,
-                        help=__('document exactly the members in module __all__ attribute. 
' - '(default: %(default)s)')) - - return parser - - -def main(argv: List[str] = sys.argv[1:]) -> None: - sphinx.locale.setlocale(locale.LC_ALL, '') - sphinx.locale.init_console(os.path.join(package_dir, 'locale'), 'sphinx') - translator, _ = sphinx.locale.init([], None) - - app = DummyApplication(translator) - logging.setup(app, sys.stdout, sys.stderr) # type: ignore - setup_documenters(app) - args = get_parser().parse_args(argv) - - if args.templates: - app.config.templates_path.append(path.abspath(args.templates)) - app.config.autosummary_ignore_module_all = not args.respect_module_all # type: ignore - - generate_autosummary_docs(args.source_file, args.output_dir, - '.' + args.suffix, - imported_members=args.imported_members, - app=app) - - -if __name__ == '__main__': - main() diff --git a/docs/hub/docs/_ext/overwriteobjectiondirective.txt b/docs/hub/docs/_ext/overwriteobjectiondirective.txt deleted file mode 100644 index e7ffdfe09a737771ead4a9c2ce1d0b945bb49947..0000000000000000000000000000000000000000 --- a/docs/hub/docs/_ext/overwriteobjectiondirective.txt +++ /dev/null @@ -1,374 +0,0 @@ -""" - sphinx.directives - ~~~~~~~~~~~~~~~~~ - - Handlers for additional ReST directives. - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-"""
-
-import re
-import inspect
-import importlib
-from functools import reduce
-from typing import TYPE_CHECKING, Any, Dict, Generic, List, Tuple, TypeVar, cast
-
-from docutils import nodes
-from docutils.nodes import Node
-from docutils.parsers.rst import directives, roles
-
-from sphinx import addnodes
-from sphinx.addnodes import desc_signature
-from sphinx.deprecation import RemovedInSphinx50Warning, deprecated_alias
-from sphinx.util import docutils, logging
-from sphinx.util.docfields import DocFieldTransformer, Field, TypedField
-from sphinx.util.docutils import SphinxDirective
-from sphinx.util.typing import OptionSpec
-
-if TYPE_CHECKING:
-    from sphinx.application import Sphinx
-
-
-# RE to strip backslash escapes
-nl_escape_re = re.compile(r'\\\n')
-strip_backslash_re = re.compile(r'\\(.)')
-
-T = TypeVar('T')
-logger = logging.getLogger(__name__)
-
-def optional_int(argument: str) -> int:
-    """
-    Check for an integer argument or None value; raise ``ValueError`` if not.
-    """
-    if argument is None:
-        return None
-    else:
-        value = int(argument)
-        if value < 0:
-            raise ValueError('negative value; must be positive or zero')
-        return value
-
-def get_api(fullname):
-    """
-    Get the API object for the given name.
-
-    :param fullname: fully qualified name of the API
-    :return: the attribute object, or None if it does not exist
-    """
-    main_module = fullname.split('.')[0]
-    main_import = importlib.import_module(main_module)
-
-    try:
-        return reduce(getattr, fullname.split('.')[1:], main_import)
-    except AttributeError:
-        return None
-
-def get_example(name: str):
-    try:
-        api_doc = inspect.getdoc(get_api(name))
-        example_str = re.findall(r'Examples:\n([\w\W]*?)(\n\n|$)', api_doc)
-        if not example_str:
-            return []
-        example_str = re.sub(r'\n\s+', r'\n', example_str[0][0])
-        example_str = example_str.strip()
-        example_list = example_str.split('\n')
-        return ["", "**样例:**", ""] + example_list + [""]
-    except Exception:
-        return []
-
-def get_platforms(name: str):
-    try:
-        api_doc = inspect.getdoc(get_api(name))
-        example_str = re.findall(r'Supported Platforms:\n\s+(.*?)\n\n', api_doc)
-        if not example_str:
-            example_str_leak = re.findall(r'Supported Platforms:\n\s+(.*)', api_doc)
-            if example_str_leak:
-                example_str = example_str_leak[0].strip()
-                example_list = example_str.split('\n')
-                example_list = [' ' + example_list[0]]
-                return ["", "支持平台:"] + example_list + [""]
-            return []
-        example_str = example_str[0].strip()
-        example_list = example_str.split('\n')
-        example_list = [' ' + example_list[0]]
-        return ["", "支持平台:"] + example_list + [""]
-    except Exception:
-        return []
-
-class ObjectDescription(SphinxDirective, Generic[T]):
-    """
-    Directive to describe a class, function or similar object. Not used
-    directly, but subclassed (in domain-specific directives) to add custom
-    behavior.
-    """
-
-    has_content = True
-    required_arguments = 1
-    optional_arguments = 0
-    final_argument_whitespace = True
-    option_spec: OptionSpec = {
-        'noindex': directives.flag,
-    }  # type: Dict[str, DirectiveOption]
-
-    # types of doc fields that this directive handles, see sphinx.util.docfields
-    doc_field_types: List[Field] = []
-    domain: str = None
-    objtype: str = None
-    indexnode: addnodes.index = None
-
-    # Warning: this might be removed in future version. Don't touch this from extensions.
-    _doc_field_type_map = {}  # type: Dict[str, Tuple[Field, bool]]
-
-    def get_field_type_map(self) -> Dict[str, Tuple[Field, bool]]:
-        if self._doc_field_type_map == {}:
-            self._doc_field_type_map = {}
-            for field in self.doc_field_types:
-                for name in field.names:
-                    self._doc_field_type_map[name] = (field, False)
-
-                if field.is_typed:
-                    typed_field = cast(TypedField, field)
-                    for name in typed_field.typenames:
-                        self._doc_field_type_map[name] = (field, True)
-
-        return self._doc_field_type_map
-
-    def get_signatures(self) -> List[str]:
-        """
-        Retrieve the signatures to document from the directive arguments. By
-        default, signatures are given as arguments, one per line.
-
-        Backslash-escaping of newlines is supported.
- """ - lines = nl_escape_re.sub('', self.arguments[0]).split('\n') - if self.config.strip_signature_backslash: - # remove backslashes to support (dummy) escapes; helps Vim highlighting - return [strip_backslash_re.sub(r'\1', line.strip()) for line in lines] - else: - return [line.strip() for line in lines] - - def handle_signature(self, sig: str, signode: desc_signature) -> Any: - """ - Parse the signature *sig* into individual nodes and append them to - *signode*. If ValueError is raised, parsing is aborted and the whole - *sig* is put into a single desc_name node. - - The return value should be a value that identifies the object. It is - passed to :meth:`add_target_and_index()` unchanged, and otherwise only - used to skip duplicates. - """ - raise ValueError - - def add_target_and_index(self, name: Any, sig: str, signode: desc_signature) -> None: - """ - Add cross-reference IDs and entries to self.indexnode, if applicable. - - *name* is whatever :meth:`handle_signature()` returned. - """ - return # do nothing by default - - def before_content(self) -> None: - """ - Called before parsing content. Used to set information about the current - directive context on the build environment. - """ - pass - - def transform_content(self, contentnode: addnodes.desc_content) -> None: - """ - Called after creating the content through nested parsing, - but before the ``object-description-transform`` event is emitted, - and before the info-fields are transformed. - Can be used to manipulate the content. - """ - pass - - def after_content(self) -> None: - """ - Called after parsing content. Used to reset information about the - current directive context on the build environment. - """ - pass - - def check_class_end(self, content): - for i in content: - if not i.startswith('.. 
include::') and i != "\n" and i != "": - return False - return True - - def extend_items(self, rst_file, start_num, num): - ls = [] - for i in range(1, num+1): - ls.append((rst_file, start_num+i)) - return ls - - def run(self) -> List[Node]: - """ - Main directive entry function, called by docutils upon encountering the - directive. - - This directive is meant to be quite easily subclassable, so it delegates - to several additional methods. What it does: - - * find out if called as a domain-specific directive, set self.domain - * create a `desc` node to fit all description inside - * parse standard options, currently `noindex` - * create an index node if needed as self.indexnode - * parse all given signatures (as returned by self.get_signatures()) - using self.handle_signature(), which should either return a name - or raise ValueError - * add index entries using self.add_target_and_index() - * parse the content and handle doc fields in it - """ - if ':' in self.name: - self.domain, self.objtype = self.name.split(':', 1) - else: - self.domain, self.objtype = '', self.name - self.indexnode = addnodes.index(entries=[]) - - node = addnodes.desc() - node.document = self.state.document - node['domain'] = self.domain - # 'desctype' is a backwards compatible attribute - node['objtype'] = node['desctype'] = self.objtype - node['noindex'] = noindex = ('noindex' in self.options) - if self.domain: - node['classes'].append(self.domain) - node['classes'].append(node['objtype']) - - self.names: List[T] = [] - signatures = self.get_signatures() - for sig in signatures: - # add a signature node for each signature in the current unit - # and add a reference target for it - signode = addnodes.desc_signature(sig, '') - self.set_source_info(signode) - node.append(signode) - try: - # name can also be a tuple, e.g. (classname, objname); - # this is strictly domain-specific (i.e. 
no assumptions may
-                # be made in this base class)
-                name = self.handle_signature(sig, signode)
-            except ValueError:
-                # signature parsing failed
-                signode.clear()
-                signode += addnodes.desc_name(sig, sig)
-                continue  # we don't want an index entry here
-            if name not in self.names:
-                self.names.append(name)
-                if not noindex:
-                    # only add target and index entry if this is the first
-                    # description of the object with this name in this desc block
-                    self.add_target_and_index(name, sig, signode)
-
-        contentnode = addnodes.desc_content()
-        node.append(contentnode)
-        if self.names:
-            # needed for association of version{added,changed} directives
-            self.env.temp_data['object'] = self.names[0]
-        self.before_content()
-        try:
-            example = get_example(self.names[0][0])
-            platforms = get_platforms(self.names[0][0])
-        except Exception as e:
-            # fall back to empty lists so that `extra` below is always a list
-            example = []
-            platforms = []
-            logger.warning(f'Invalid API name in {self.arguments[0]}.')
-            logger.warning(f'{e}')
-        extra = platforms + example
-        if extra:
-            if self.objtype == "method":
-                self.content.data.extend(extra)
-            else:
-                index_num = 0
-                for num, i in enumerate(self.content.data):
-                    if i.startswith('.. 
py:method::') or self.check_class_end(self.content.data[num:]): - index_num = num - break - if index_num: - count = len(self.content.data) - for i in extra: - self.content.data.insert(index_num-count, i) - else: - self.content.data.extend(extra) - try: - self.content.items.extend(self.extend_items(self.content.items[0][0], self.content.items[-1][1], len(extra))) - except Exception as e: - logger.warning(f'{e}') - self.state.nested_parse(self.content, self.content_offset, contentnode) - self.transform_content(contentnode) - self.env.app.emit('object-description-transform', - self.domain, self.objtype, contentnode) - DocFieldTransformer(self).transform_all(contentnode) - self.env.temp_data['object'] = None - self.after_content() - return [self.indexnode, node] - - -class DefaultRole(SphinxDirective): - """ - Set the default interpreted text role. Overridden from docutils. - """ - - optional_arguments = 1 - final_argument_whitespace = False - - def run(self) -> List[Node]: - if not self.arguments: - docutils.unregister_role('') - return [] - role_name = self.arguments[0] - role, messages = roles.role(role_name, self.state_machine.language, - self.lineno, self.state.reporter) - if role: - docutils.register_role('', role) - self.env.temp_data['default_role'] = role_name - else: - literal_block = nodes.literal_block(self.block_text, self.block_text) - reporter = self.state.reporter - error = reporter.error('Unknown interpreted text role "%s".' % role_name, - literal_block, line=self.lineno) - messages += [error] - - return cast(List[nodes.Node], messages) - - -class DefaultDomain(SphinxDirective): - """ - Directive to (re-)set the default domain for this source file. 
- """ - - has_content = False - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = False - option_spec = {} # type: Dict - - def run(self) -> List[Node]: - domain_name = self.arguments[0].lower() - # if domain_name not in env.domains: - # # try searching by label - # for domain in env.domains.values(): - # if domain.label.lower() == domain_name: - # domain_name = domain.name - # break - self.env.temp_data['default_domain'] = self.env.domains.get(domain_name) - return [] - -def setup(app: "Sphinx") -> Dict[str, Any]: - app.add_config_value("strip_signature_backslash", False, 'env') - directives.register_directive('default-role', DefaultRole) - directives.register_directive('default-domain', DefaultDomain) - directives.register_directive('describe', ObjectDescription) - # new, more consistent, name - directives.register_directive('object', ObjectDescription) - - app.add_event('object-description-transform') - - return { - 'version': 'builtin', - 'parallel_read_safe': True, - 'parallel_write_safe': True, - } - diff --git a/docs/hub/docs/_ext/overwriteviewcode.txt b/docs/hub/docs/_ext/overwriteviewcode.txt deleted file mode 100644 index 172780ec56b3ed90e7b0add617257a618cf38ee0..0000000000000000000000000000000000000000 --- a/docs/hub/docs/_ext/overwriteviewcode.txt +++ /dev/null @@ -1,378 +0,0 @@ -""" - sphinx.ext.viewcode - ~~~~~~~~~~~~~~~~~~~ - - Add links to module code in Python object descriptions. - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import posixpath -import traceback -import warnings -from os import path -from typing import Any, Dict, Generator, Iterable, Optional, Set, Tuple, cast - -from docutils import nodes -from docutils.nodes import Element, Node - -import sphinx -from sphinx import addnodes -from sphinx.application import Sphinx -from sphinx.builders import Builder -from sphinx.builders.html import StandaloneHTMLBuilder -from sphinx.deprecation import RemovedInSphinx50Warning -from sphinx.environment import BuildEnvironment -from sphinx.locale import _, __ -from sphinx.pycode import ModuleAnalyzer -from sphinx.transforms.post_transforms import SphinxPostTransform -from sphinx.util import get_full_modname, logging, status_iterator -from sphinx.util.nodes import make_refnode - - -logger = logging.getLogger(__name__) - - -OUTPUT_DIRNAME = '_modules' - - -class viewcode_anchor(Element): - """Node for viewcode anchors. - - This node will be processed in the resolving phase. - For viewcode supported builders, they will be all converted to the anchors. - For not supported builders, they will be removed. - """ - - -def _get_full_modname(app: Sphinx, modname: str, attribute: str) -> Optional[str]: - try: - return get_full_modname(modname, attribute) - except AttributeError: - # sphinx.ext.viewcode can't follow class instance attribute - # then AttributeError logging output only verbose mode. - logger.verbose('Didn\'t find %s in %s', attribute, modname) - return None - except Exception as e: - # sphinx.ext.viewcode follow python domain directives. - # because of that, if there are no real modules exists that specified - # by py:function or other directives, viewcode emits a lot of warnings. - # It should be displayed only verbose mode. 
- logger.verbose(traceback.format_exc().rstrip()) - logger.verbose('viewcode can\'t import %s, failed with error "%s"', modname, e) - return None - - -def is_supported_builder(builder: Builder) -> bool: - if builder.format != 'html': - return False - elif builder.name == 'singlehtml': - return False - elif builder.name.startswith('epub') and not builder.config.viewcode_enable_epub: - return False - else: - return True - - -def doctree_read(app: Sphinx, doctree: Node) -> None: - env = app.builder.env - if not hasattr(env, '_viewcode_modules'): - env._viewcode_modules = {} # type: ignore - - def has_tag(modname: str, fullname: str, docname: str, refname: str) -> bool: - entry = env._viewcode_modules.get(modname, None) # type: ignore - if entry is False: - return False - - code_tags = app.emit_firstresult('viewcode-find-source', modname) - if code_tags is None: - try: - analyzer = ModuleAnalyzer.for_module(modname) - analyzer.find_tags() - except Exception: - env._viewcode_modules[modname] = False # type: ignore - return False - - code = analyzer.code - tags = analyzer.tags - else: - code, tags = code_tags - - if entry is None or entry[0] != code: - entry = code, tags, {}, refname - env._viewcode_modules[modname] = entry # type: ignore - _, tags, used, _ = entry - if fullname in tags: - used[fullname] = docname - return True - - return False - - for objnode in list(doctree.findall(addnodes.desc)): - if objnode.get('domain') != 'py': - continue - names: Set[str] = set() - for signode in objnode: - if not isinstance(signode, addnodes.desc_signature): - continue - modname = signode.get('module') - fullname = signode.get('fullname') - try: - if fullname and modname==None: - if fullname.split('.')[-1].lower() == fullname.split('.')[-1] and fullname.split('.')[-2].lower() != fullname.split('.')[-2]: - modname = '.'.join(fullname.split('.')[:-2]) - fullname = '.'.join(fullname.split('.')[-2:]) - else: - modname = '.'.join(fullname.split('.')[:-1]) - fullname = 
fullname.split('.')[-1]
-                    fullname_new = fullname
-            except Exception:
-                logger.warning(f'error_modname:{modname}')
-                logger.warning(f'error_fullname:{fullname}')
-            refname = modname
-            if env.config.viewcode_follow_imported_members:
-                new_modname = app.emit_firstresult(
-                    'viewcode-follow-imported', modname, fullname,
-                )
-                if not new_modname:
-                    new_modname = _get_full_modname(app, modname, fullname)
-                modname = new_modname
-            # logger.warning(f'new_modname:{modname}')
-            if not modname:
-                continue
-            # fullname = signode.get('fullname')
-            # if fullname and modname==None:
-            fullname = fullname_new
-            if not has_tag(modname, fullname, env.docname, refname):
-                continue
-            if fullname in names:
-                # only one link per name, please
-                continue
-            names.add(fullname)
-            pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))
-            signode += viewcode_anchor(reftarget=pagename, refid=fullname, refdoc=env.docname)
-
-
-def env_merge_info(app: Sphinx, env: BuildEnvironment, docnames: Iterable[str],
-                   other: BuildEnvironment) -> None:
-    if not hasattr(other, '_viewcode_modules'):
-        return
-    # create a _viewcode_modules dict on the main environment
-    if not hasattr(env, '_viewcode_modules'):
-        env._viewcode_modules = {}  # type: ignore
-    # now merge in the information from the subprocess
-    for modname, entry in other._viewcode_modules.items():  # type: ignore
-        if modname not in env._viewcode_modules:  # type: ignore
-            env._viewcode_modules[modname] = entry  # type: ignore
-        else:
-            if env._viewcode_modules[modname]:  # type: ignore
-                used = env._viewcode_modules[modname][2]  # type: ignore
-                for fullname, docname in entry[2].items():
-                    if fullname not in used:
-                        used[fullname] = docname
-
-
-def env_purge_doc(app: Sphinx, env: BuildEnvironment, docname: str) -> None:
-    modules = getattr(env, '_viewcode_modules', {})
-
-    for modname, entry in list(modules.items()):
-        if entry is False:
-            continue
-
-        code, tags, used, refname = entry
-        for fullname in list(used):
-            if 
used[fullname] == docname:
-                used.pop(fullname)
-
-        if len(used) == 0:
-            modules.pop(modname)
-
-
-class ViewcodeAnchorTransform(SphinxPostTransform):
-    """Convert or remove viewcode_anchor nodes, depending on the builder."""
-    default_priority = 100
-
-    def run(self, **kwargs: Any) -> None:
-        if is_supported_builder(self.app.builder):
-            self.convert_viewcode_anchors()
-        else:
-            self.remove_viewcode_anchors()
-
-    def convert_viewcode_anchors(self) -> None:
-        for node in self.document.findall(viewcode_anchor):
-            anchor = nodes.inline('', _('[源代码]'), classes=['viewcode-link'])
-            refnode = make_refnode(self.app.builder, node['refdoc'], node['reftarget'],
-                                   node['refid'], anchor)
-            node.replace_self(refnode)
-
-    def remove_viewcode_anchors(self) -> None:
-        for node in list(self.document.findall(viewcode_anchor)):
-            node.parent.remove(node)
-
-
-def missing_reference(app: Sphinx, env: BuildEnvironment, node: Element, contnode: Node
-                      ) -> Optional[Node]:
-    # resolve our "viewcode" reference nodes -- they need special treatment
-    if node['reftype'] == 'viewcode':
-        warnings.warn('viewcode extension no longer uses the pending_xref node. '
-                      'Please update your extension.', RemovedInSphinx50Warning)
-        return make_refnode(app.builder, node['refdoc'], node['reftarget'],
-                            node['refid'], contnode)
-
-    return None
-
-
-def get_module_filename(app: Sphinx, modname: str) -> Optional[str]:
-    """Get module filename for *modname*."""
-    source_info = app.emit_firstresult('viewcode-find-source', modname)
-    if source_info:
-        return None
-    else:
-        try:
-            filename, source = ModuleAnalyzer.get_module_source(modname)
-            return filename
-        except Exception:
-            return None
-
-
-def should_generate_module_page(app: Sphinx, modname: str) -> bool:
-    """Check whether generation of the module page is needed."""
-    module_filename = get_module_filename(app, modname)
-    if module_filename is None:
-        # Always (re-)generate module page when module filename is not found.
-        return True
-
-    builder = cast(StandaloneHTMLBuilder, app.builder)
-    basename = modname.replace('.', '/') + builder.out_suffix
-    page_filename = path.join(app.outdir, '_modules/', basename)
-
-    try:
-        if path.getmtime(module_filename) <= path.getmtime(page_filename):
-            # generation is not needed if the HTML page is newer than module file.
-            return False
-    except IOError:
-        pass
-
-    return True
-
-
-def collect_pages(app: Sphinx) -> Generator[Tuple[str, Dict[str, Any], str], None, None]:
-    env = app.builder.env
-    if not hasattr(env, '_viewcode_modules'):
-        return
-    if not is_supported_builder(app.builder):
-        return
-    highlighter = app.builder.highlighter  # type: ignore
-    urito = app.builder.get_relative_uri
-
-    modnames = set(env._viewcode_modules)  # type: ignore
-
-    for modname, entry in status_iterator(
-            sorted(env._viewcode_modules.items()),  # type: ignore
-            __('highlighting module code... '), "blue",
-            len(env._viewcode_modules),  # type: ignore
-            app.verbosity, lambda x: x[0]):
-        if not entry:
-            continue
-        if not should_generate_module_page(app, modname):
-            continue
-
-        code, tags, used, refname = entry
-        # construct a page name for the highlighted source
-        pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))
-        # highlight the source using the builder's highlighter
-        if env.config.highlight_language in ('python3', 'default', 'none'):
-            lexer = env.config.highlight_language
-        else:
-            lexer = 'python'
-        highlighted = highlighter.highlight_block(code, lexer, linenos=False)
-        # split the code into lines
-        lines = highlighted.splitlines()
-        # split off wrap markup from the first line of the actual code
-        before, after = lines[0].split('<pre>')
-        lines[0:1] = [before + '<pre>', after]
-        # nothing to do for the last line; it always starts with </pre> anyway
-        # now that we have code lines (starting at index 1), insert anchors for
-        # the collected tags (HACK: this only works if the tag boundaries are
-        # properly nested!)
-        maxindex = len(lines) - 1
-        for name, docname in used.items():
-            type, start, end = tags[name]
-            backlink = urito(pagename, docname) + '#' + refname + '.' + name
-            lines[start] = (
-                '<div class="viewcode-block" id="%s"><a class="viewcode-back" '
-                'href="%s">%s</a>' % (name, backlink, _('[文档]')) +
-                lines[start])
-            lines[min(end, maxindex)] += '</div>'
-        # try to find parents (for submodules)
-        parents = []
-        parent = modname
-        while '.' in parent:
-            parent = parent.rsplit('.', 1)[0]
-            if parent in modnames:
-                parents.append({
-                    'link': urito(pagename,
-                                  posixpath.join(OUTPUT_DIRNAME, parent.replace('.', '/'))),
-                    'title': parent})
-        parents.append({'link': urito(pagename, posixpath.join(OUTPUT_DIRNAME, 'index')),
-                        'title': _('Module code')})
-        parents.reverse()
-        # putting it all together
-        context = {
-            'parents': parents,
-            'title': modname,
-            'body': (_('<h1>Source code for %s</h1>') % modname +
-                     '\n'.join(lines)),
-        }
-        yield (pagename, context, 'page.html')
-
-    if not modnames:
-        return
-
-    html = ['\n']
-    # the stack logic is needed for using nested lists for submodules
-    stack = ['']
-    for modname in sorted(modnames):
-        if modname.startswith(stack[-1]):
-            stack.append(modname + '.')
-            html.append('<ul>')
-        else:
-            stack.pop()
-            while not modname.startswith(stack[-1]):
-                stack.pop()
-                html.append('</ul>')
-            stack.append(modname + '.')
-        html.append('<li><a href="%s">%s</a></li>\n' % (
-            urito(posixpath.join(OUTPUT_DIRNAME, 'index'),
-                  posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))),
-            modname))
-    html.append('</ul>' * (len(stack) - 1))
-    context = {
-        'title': _('Overview: module code'),
-        'body': (_('<h1>All modules for which code is available</h1>') +
-                 ''.join(html)),
-    }
-
-    yield (posixpath.join(OUTPUT_DIRNAME, 'index'), context, 'page.html')
-
-
-def setup(app: Sphinx) -> Dict[str, Any]:
-    app.add_config_value('viewcode_import', None, False)
-    app.add_config_value('viewcode_enable_epub', False, False)
-    app.add_config_value('viewcode_follow_imported_members', True, False)
-    app.connect('doctree-read', doctree_read)
-    app.connect('env-merge-info', env_merge_info)
-    app.connect('env-purge-doc', env_purge_doc)
-    app.connect('html-collect-pages', collect_pages)
-    app.connect('missing-reference', missing_reference)
-    # app.add_config_value('viewcode_include_modules', [], 'env')
-    # app.add_config_value('viewcode_exclude_modules', [], 'env')
-    app.add_event('viewcode-find-source')
-    app.add_event('viewcode-follow-imported')
-    app.add_post_transform(ViewcodeAnchorTransform)
-    return {
-        'version': sphinx.__display_version__,
-        'env_version': 1,
-        'parallel_read_safe': True
-    }
diff --git a/docs/hub/docs/requirements.txt b/docs/hub/docs/requirements.txt
deleted file mode 100644
index a1b6a69f6dbd9c6f78710f56889e14f0e85b27f4..0000000000000000000000000000000000000000
--- a/docs/hub/docs/requirements.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-sphinx == 4.4.0
-docutils == 0.17.1
-myst-parser == 0.18.1
-sphinx_rtd_theme == 1.0.0
-numpy
-IPython
-jieba
diff --git a/docs/hub/docs/source_en/conf.py b/docs/hub/docs/source_en/conf.py
deleted file mode 100644
index 2e93a1054c72cec5b301aacf15617c73dcac8748..0000000000000000000000000000000000000000
--- a/docs/hub/docs/source_en/conf.py
+++ /dev/null
@@ -1,192 +0,0 @@
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options.
For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import shutil -import IPython -import re -import sys -import sphinx.ext.autosummary.generate as g -from sphinx.ext import autodoc as sphinx_autodoc - -import mindspore_hub - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Hub' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'myst_parser', - 'sphinx.ext.mathjax', - 'IPython.sphinxext.ipython_console_highlighting' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. 
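The `source_suffix` mapping above ties each file extension to a parser name. A minimal standalone sketch (plain Python, not part of the deleted conf.py; `parser_for` is a hypothetical helper) of how such a mapping resolves a file:

```python
import os

# Same shape as the source_suffix mapping in the conf.py above.
source_suffix = {
    '.rst': 'restructuredtext',
    '.md': 'markdown',
}

def parser_for(filename: str) -> str:
    """Return the parser name registered for a file's extension."""
    ext = os.path.splitext(filename)[1]
    return source_suffix.get(ext, 'unknown')

print(parser_for('index.rst'))  # restructuredtext
print(parser_for('intro.md'))   # markdown
```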
-mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -pygments_style = 'sphinx' - -autodoc_inherit_docstrings = False - -# -- Options for HTML output ------------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = 'sphinx_rtd_theme' - -import sphinx_rtd_theme -layout_target = os.path.join(os.path.dirname(sphinx_rtd_theme.__file__), 'layout.html') -layout_src = '../../../../resource/_static/layout.html' -if os.path.exists(layout_target): - os.remove(layout_target) -shutil.copy(layout_src, layout_target) - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - 'python': ('https://docs.python.org/3', '../../../../resource/python_objects.inv'), -} - -# overwriteautosummary_generate add view source for api and more autosummary class availably. -with open('../_ext/overwriteautosummary_generate.txt', 'r', encoding="utf8") as f: - exec(f.read(), g.__dict__) - -# Modify default signatures for autodoc. 
-autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__) -autodoc_source_re = re.compile(r'stringify_signature\(.*?\)') -get_param_func_str = r"""\ -import re -import inspect as inspect_ - -def get_param_func(func): - try: - source_code = inspect_.getsource(func) - if func.__doc__: - source_code = source_code.replace(func.__doc__, '') - all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code) - all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\"")) - return all_params - except: - return '' - -def get_obj(obj): - if isinstance(obj, type): - return obj.__init__ - - return obj -""" - -with open(autodoc_source_path, "r+", encoding="utf8") as f: - code_str = f.read() - code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0) - exec(get_param_func_str, sphinx_autodoc.__dict__) - exec(code_str, sphinx_autodoc.__dict__) - -# Copy source files of the English Python API from the hub repository.
-from sphinx.util import logging -logger = logging.getLogger(__name__) - -src_dir_en = os.path.join(os.getenv("HB_PATH"), 'docs/api_python_en') -present_path = os.path.dirname(__file__) - -for i in os.listdir(src_dir_en): - if os.path.isfile(os.path.join(src_dir_en,i)): - if os.path.exists('./'+i): - os.remove('./'+i) - shutil.copy(os.path.join(src_dir_en,i),'./'+i) - else: - if os.path.exists('./'+i): - shutil.rmtree('./'+i) - shutil.copytree(os.path.join(src_dir_en,i),'./'+i) - -# get params for add view source -import json - -if os.path.exists('../../../../tools/generate_html/version.json'): - with open('../../../../tools/generate_html/version.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily_dev.json'): - with open('../../../../tools/generate_html/daily_dev.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily.json'): - with open('../../../../tools/generate_html/daily.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) - -if os.getenv("HB_PATH").split('/')[-1]: - copy_repo = os.getenv("HB_PATH").split('/')[-1] -else: - copy_repo = os.getenv("HB_PATH").split('/')[-2] - -branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == copy_repo][0] -docs_branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == 'tutorials'][0] -cst_module_name = 'mindspore_hub' -repo_whl = 'mindspore_hub' -giturl = 'https://gitee.com/mindspore/' - -sys.path.append(os.path.abspath('../../../../resource/sphinx_ext')) -# import anchor_mod -import nbsphinx_mod - -sys.path.append(os.path.abspath('../../../../resource/search')) -import search_code - -sys.path.append(os.path.abspath('../../../../resource/custom_directives')) -from custom_directives import IncludeCodeDirective - -def setup(app): - app.add_directive('includecode', IncludeCodeDirective) - 
app.add_config_value('docs_branch', '', True) - app.add_config_value('branch', '', True) - app.add_config_value('cst_module_name', '', True) - app.add_config_value('copy_repo', '', True) - app.add_config_value('giturl', '', True) - app.add_config_value('repo_whl', '', True) diff --git a/docs/hub/docs/source_en/hub_installation.md b/docs/hub/docs/source_en/hub_installation.md deleted file mode 100644 index 34c2c8777dc8b6abceb76b17b52aee805eb52199..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_en/hub_installation.md +++ /dev/null @@ -1,116 +0,0 @@ -# MindSpore Hub Installation - -- [MindSpore Hub Installation](#mindspore-hub-installation) - - [System Environment Information Confirmation](#system-environment-information-confirmation) - - [Installation Methods](#installation-methods) - - [Installation by pip](#installation-by-pip) - - [Installation by Source Code](#installation-by-source-code) - - [Installation Verification](#installation-verification) - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/hub/docs/source_en/hub_installation.md) - -## System Environment Information Confirmation - -- The hardware platform supports Ascend, GPU and CPU. -- Confirm that [Python](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) 3.7.5 is installed. -- The versions of MindSpore Hub and MindSpore must be consistent. -- MindSpore Hub supports only Linux distro with x86 architecture 64-bit or ARM architecture 64-bit. -- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items. - -## Installation Methods - -You can install MindSpore Hub either by pip or by source code. - -### Installation by pip - -Install MindSpore Hub using `pip` command. 
`hub` depends on the MindSpore version used in the current environment. - -Download and install the MindSpore Hub whl package from the [Release List](https://www.mindspore.cn/versions/en). - -```shell -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/Hub/any/mindspore_hub-{version}-py3-none-any.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -> - `{version}` denotes the version of MindSpore Hub. For example, when downloading MindSpore Hub 1.3.0, `{version}` should be 1.3.0. - -### Installation by Source Code - -1. Download the source code from Gitee. - - ```bash - git clone https://gitee.com/mindspore/hub.git - ``` - -2. Compile and install in the MindSpore Hub directory. - - ```bash - cd hub - python setup.py install - ``` - -## Installation Verification - -Run the following command in a network-enabled environment to verify the installation. - -```python -import mindspore_hub as mshub - -model = mshub.load("mindspore/1.6/lenet_mnist", num_class=10) -``` - -If the following information is displayed, the installation is successful: - -```text -Downloading data from url https://gitee.com/mindspore/hub/raw/master/mshub_res/assets/mindspore/1.6/lenet_mnist.md - -Download finished! -File size = 0.00 Mb -Checking /home/ma-user/.mscache/mindspore/1.6/lenet_mnist.md...Passed! -``` - -## FAQ - -**Q: What to do when `SSL: CERTIFICATE_VERIFY_FAILED` occurs?** - -A: SSL verification may fail in Python because of an incorrect certificate configuration in your network environment, for example, when you use a proxy to connect to the Internet. In this case, you can use either of the following methods to solve the problem: - -- Configure the SSL certificate **(recommended)**. -- Add the following code before importing mindspore_hub (the fastest method).
- - ```python - import ssl - ssl._create_default_https_context = ssl._create_unverified_context - - import mindspore_hub as mshub - model = mshub.load("mindspore/1.6/lenet_mnist", num_classes=10) - ``` - -**Q: What to do when `No module named src.*` occurs?** - -A: When you use mindspore_hub.load to load different models in the same process, each model's file path must be inserted into sys.path. Test results show that Python only looks for `src.*` in the first inserted path, and deleting that path afterwards does not help. To solve the problem, you can copy all model files to the working directory. The code is as follows: - -```python -# mindspore_hub_install_path/load.py -def _copy_all_file_to_target_path(path, target_path): - if not os.path.exists(target_path): - os.makedirs(target_path) - path = os.path.realpath(path) - target_path = os.path.realpath(target_path) - for p in os.listdir(path): - copy_path = os.path.join(path, p) - target_dir = os.path.join(target_path, p) - _delete_if_exist(target_dir) - if os.path.isdir(copy_path): - _copy_all_file_to_target_path(copy_path, target_dir) - else: - shutil.copy(copy_path, target_dir) - -def _get_network_from_cache(name, path, *args, **kwargs): - _copy_all_file_to_target_path(path, os.getcwd()) - config_path = os.path.join(os.getcwd(), HUB_CONFIG_FILE) - if not os.path.exists(config_path): - raise ValueError('{} not exists.'.format(config_path)) - ...... -``` - -**Note**: Some files of the previous model may be replaced when the next model is loaded. However, the necessary model files must exist during model training. Therefore, you must finish training the previous model before loading the next one.
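The `src.*` shadowing described in this FAQ can be reproduced with a minimal, self-contained sketch (hypothetical directory names; no MindSpore required). Once a package named `src` has been imported from the first inserted path, later imports resolve to the cached module in `sys.modules`, regardless of what is prepended to `sys.path`:

```python
import os
import sys
import tempfile

# Create two fake "model directories", each with its own src package.
root = tempfile.mkdtemp()
for name, version in [("model_a", "A"), ("model_b", "B")]:
    pkg = os.path.join(root, name, "src")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("VERSION = '{}'\n".format(version))

# "Load" model_a: its path is inserted into sys.path.
sys.path.insert(0, os.path.join(root, "model_a"))
import src
print(src.VERSION)  # A

# "Load" model_b: even though its path now comes first,
# Python returns the cached src module from model_a.
sys.path.insert(0, os.path.join(root, "model_b"))
import src
print(src.VERSION)  # still A
```

Copying the second model's files into the working directory, as `load.py` does above, sidesteps this caching because the same on-disk location is reused.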
diff --git a/docs/hub/docs/source_en/index.rst b/docs/hub/docs/source_en/index.rst deleted file mode 100644 index 05e0354925b3b66fa6fd722ed76727badb133988..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_en/index.rst +++ /dev/null @@ -1,69 +0,0 @@ -MindSpore Hub Documents -========================= - -MindSpore Hub is a pre-trained model application tool of the MindSpore ecosystem. It provides the following functions: - -- Plug-and-play model loading -- Easy-to-use transfer learning - -.. code-block:: - - import mindspore - import mindspore_hub as mshub - from mindspore import set_context, GRAPH_MODE - - set_context(mode=GRAPH_MODE, - device_target="Ascend", - device_id=0) - - model = "mindspore/1.6/googlenet_cifar10" - - # Initialize the number of classes based on the pre-trained model. - network = mshub.load(model, num_classes=10) - network.set_train(False) - - # ... - -Code repository address: - -Typical Application Scenarios --------------------------------------------- - -1. `Inference Validation `_ - - With only one line of code, use mindspore_hub.load to load the pre-trained model. - -2. `Transfer Learning `_ - - After loading models using mindspore_hub.load, add an extra argument to load the feature extraction of the neural network. This makes it easier to add new layers for transfer learning. - -3. `Model Releasing `_ - - Release the trained model to MindSpore Hub according to the specified procedure for download and use. - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: Installation - - hub_installation - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: Guide - - loading_model_from_hub - publish_model - -.. toctree:: - :maxdepth: 1 - :caption: API References - - hub - -.. 
toctree:: - :maxdepth: 1 - :caption: Models - - MindSpore Hub↗ diff --git a/docs/hub/docs/source_en/loading_model_from_hub.md deleted file mode 100644 index e7db7856ffbdd4ed9e0ea372460b365505b46fff..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_en/loading_model_from_hub.md +++ /dev/null @@ -1,212 +0,0 @@ -# Loading the Model from Hub - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/hub/docs/source_en/loading_model_from_hub.md) - -## Overview - -For individual developers, training a good model from scratch requires a large amount of well-labeled data, sufficient computational resources, and a lot of training and debugging time. This makes model training very resource-consuming and raises the barrier to AI development. To solve these problems, MindSpore Hub provides many pre-trained model weight files, which enable developers to quickly train a good model with only a small amount of data and training time. - -This document demonstrates the use of the models provided by MindSpore Hub for both inference validation and transfer learning, and shows how to quickly complete training with a small amount of data to obtain a good model. - -## For Inference Validation - -The `mindspore_hub.load` API is used to load the pre-trained model in a single line of code. The main process of model loading is as follows: - -1. Search for the model of interest on the [MindSpore Hub Website](https://www.mindspore.cn/hub). - - For example, if you aim to perform image classification on the CIFAR-10 dataset using GoogleNet, search the [MindSpore Hub Website](https://www.mindspore.cn/hub) with the keyword `GoogleNet`. All related models will be returned. Once you enter the related model page, you can find the `Usage`.
**Note**: if the model page does not provide a `Usage` section, the model currently does not support loading with MindSpore Hub. - -2. Load the model according to the `Usage`, as shown in the example below: - - ```python - import mindspore_hub as mshub - import mindspore - from mindspore import Tensor, nn, Model, set_context, GRAPH_MODE - from mindspore import dtype as mstype - import mindspore.dataset.vision as vision - - set_context(mode=GRAPH_MODE, - device_target="Ascend", - device_id=0) - - model = "mindspore/1.6/googlenet_cifar10" - - # Initialize the number of classes based on the pre-trained model. - network = mshub.load(model, num_classes=10) - network.set_train(False) - - # ... - - ``` - -3. After loading the model, you can use MindSpore to perform inference. You can refer to [Multi-Platform Inference Overview](https://www.mindspore.cn/tutorials/en/master/model_infer/ms_infer/llm_inference_overview.html). - -## For Transfer Training - -When loading a model with the `mindspore_hub.load` API, we can add an extra argument to load only the feature extraction part of the model, which makes it easy to add new layers for transfer learning. This feature is documented on the related model page when the model developer has integrated an extra argument (e.g., `include_top`) into the model construction. The value of `include_top` is True or False, indicating whether to keep the top layer of the fully-connected network. - -We use [MobileNetV2](https://gitee.com/mindspore/models/tree/master/research/cv/centerface) as an example to illustrate how to load a model trained on the ImageNet dataset and then perform transfer learning (re-training) on a specific sub-task dataset. The main steps are listed below: - -1. Search for the model of interest on the [MindSpore Hub Website](https://www.mindspore.cn/hub) and find the corresponding `Usage`. - -2. Load the model from MindSpore Hub using the `Usage`.
Note that the parameter `include_top` is provided by the model developer. - - ```python - import os - import mindspore_hub as mshub - import mindspore - from mindspore import Tensor, nn, set_context, GRAPH_MODE, train - from mindspore.nn import Momentum - from mindspore import save_checkpoint, load_checkpoint, load_param_into_net - from mindspore import ops - import mindspore.dataset as ds - import mindspore.dataset.transforms as transforms - import mindspore.dataset.vision as vision - from mindspore import dtype as mstype - from mindspore import Model - set_context(mode=GRAPH_MODE, device_target="Ascend", device_id=0) - - model = "mindspore/1.6/mobilenetv2_imagenet2012" - network = mshub.load(model, num_classes=500, include_top=False, activation="Sigmoid") - network.set_train(False) - ``` - -3. Add a new classification layer into the current model architecture. - - ```python - class ReduceMeanFlatten(nn.Cell): - def __init__(self): - super(ReduceMeanFlatten, self).__init__() - self.mean = ops.ReduceMean(keep_dims=True) - self.flatten = nn.Flatten() - - def construct(self, x): - x = self.mean(x, (2, 3)) - x = self.flatten(x) - return x - - # Check the MindSpore Hub website to confirm that the last output shape is 1280. - last_channel = 1280 - - # The number of classes in the target task is 10. - num_classes = 10 - - reducemean_flatten = ReduceMeanFlatten() - - classification_layer = nn.Dense(last_channel, num_classes) - classification_layer.set_train(True) - - train_network = nn.SequentialCell([network, reducemean_flatten, classification_layer]) - ``` - -4. Define `dataset_loader`. - - As shown below, the new dataset used for fine-tuning is [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html). Note that you need to download the `binary version` of the dataset. After downloading and decompressing it, the following code can be used for data loading and processing. Note that `dataset_path` is the path to the dataset and should be provided by the user.
- - ```python - def create_cifar10dataset(dataset_path, batch_size, usage='train', shuffle=True): - data_set = ds.Cifar10Dataset(dataset_dir=dataset_path, usage=usage, shuffle=shuffle) - - # define map operations - trans = [ - vision.Resize((256, 256)), - vision.RandomHorizontalFlip(prob=0.5), - vision.Rescale(1.0 / 255.0, 0.0), - vision.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), - vision.HWC2CHW() - ] - - type_cast_op = transforms.TypeCast(mstype.int32) - - data_set = data_set.map(operations=type_cast_op, input_columns="label", num_parallel_workers=8) - data_set = data_set.map(operations=trans, input_columns="image", num_parallel_workers=8) - - # apply batch operations - data_set = data_set.batch(batch_size, drop_remainder=True) - return data_set - - # Create Dataset - dataset_path = "/path_to_dataset/cifar-10-batches-bin" - dataset = create_cifar10dataset(dataset_path, batch_size=32, usage='train', shuffle=True) - ``` - -5. Define `loss`, `optimizer` and `learning rate`. - - ```python - def generate_steps_lr(lr_init, steps_per_epoch, total_epochs): - total_steps = total_epochs * steps_per_epoch - decay_epoch_index = [0.3*total_steps, 0.6*total_steps, 0.8*total_steps] - lr_each_step = [] - for i in range(total_steps): - if i < decay_epoch_index[0]: - lr = lr_init - elif i < decay_epoch_index[1]: - lr = lr_init * 0.1 - elif i < decay_epoch_index[2]: - lr = lr_init * 0.01 - else: - lr = lr_init * 0.001 - lr_each_step.append(lr) - return lr_each_step - - # Set epoch size - epoch_size = 60 - - # Wrap the backbone network with loss. - loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") - loss_net = nn.WithLossCell(train_network, loss_fn) - steps_per_epoch = dataset.get_dataset_size() - lr = generate_steps_lr(lr_init=0.01, steps_per_epoch=steps_per_epoch, total_epochs=epoch_size) - - # Create an optimizer. 
- optim = Momentum(filter(lambda x: x.requires_grad, classification_layer.get_parameters()), Tensor(lr, mindspore.float32), 0.9, 4e-5) - train_net = nn.TrainOneStepCell(loss_net, optim) - ``` - -6. Start fine-tuning. - - ```python - for epoch in range(epoch_size): - for i, items in enumerate(dataset): - data, label = items - data = mindspore.Tensor(data) - label = mindspore.Tensor(label) - - loss = train_net(data, label) - print(f"epoch: {epoch}/{epoch_size}, loss: {loss}") - # Save the ckpt file for each epoch. - if not os.path.exists('ckpt'): - os.mkdir('ckpt') - ckpt_path = f"./ckpt/cifar10_finetune_epoch{epoch}.ckpt" - save_checkpoint(train_network, ckpt_path) - ``` - -7. Eval on test set. - - ```python - model = "mindspore/1.6/mobilenetv2_imagenet2012" - - network = mshub.load(model, num_classes=500, pretrained=True, include_top=False, activation="Sigmoid") - network.set_train(False) - reducemean_flatten = ReduceMeanFlatten() - classification_layer = nn.Dense(last_channel, num_classes) - classification_layer.set_train(False) - softmax = nn.Softmax() - network = nn.SequentialCell([network, reducemean_flatten, classification_layer, softmax]) - - # Load a pre-trained ckpt file. - ckpt_path = "./ckpt/cifar10_finetune_epoch59.ckpt" - trained_ckpt = load_checkpoint(ckpt_path) - load_param_into_net(classification_layer, trained_ckpt) - - loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") - - # Define loss and create model. 
- eval_dataset = create_cifar10dataset(dataset_path, batch_size=32, usage='test', shuffle=False) - eval_metrics = {'Loss': train.Loss(), - 'Top1-Acc': train.Top1CategoricalAccuracy(), - 'Top5-Acc': train.Top5CategoricalAccuracy()} - model = Model(network, loss_fn=loss, optimizer=None, metrics=eval_metrics) - metrics = model.eval(eval_dataset) - print("metric: ", metrics) - ``` diff --git a/docs/hub/docs/source_en/publish_model.md deleted file mode 100644 index 1b7a36dd740db8d83de3ce015433be0fa23fa404..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_en/publish_model.md +++ /dev/null @@ -1,74 +0,0 @@ -# Publishing Models Using MindSpore Hub - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/hub/docs/source_en/publish_model.md) - -## Overview - -[MindSpore Hub](https://www.mindspore.cn/hub) is a platform for storing pre-trained models provided by MindSpore or third-party developers. It provides application developers with simple model loading and fine-tuning APIs, which enable users to perform inference or fine-tuning based on the pre-trained models and then deploy them to their own applications. Users can also submit their pre-trained models to MindSpore Hub following the specific steps, so that other users can download and use the published models. - -This tutorial uses GoogleNet as an example to describe how model developers interested in publishing models to MindSpore Hub can submit them. - -## How to Publish Models - -You can publish models to MindSpore Hub via a PR in the [hub](https://gitee.com/mindspore/hub) repo. Here we use GoogleNet as an example to list the steps of model submission to MindSpore Hub. - -1. Host your pre-trained model in a storage location that we are able to access. - -2.
Add a model generation Python file called `mindspore_hub_conf.py` in your own repo using this [template](https://gitee.com/mindspore/models/blob/master/research/cv/SE_ResNeXt50/mindspore_hub_conf.py). The location of the `mindspore_hub_conf.py` file is shown below: - - ```text - googlenet - ├── src - │   ├── googlenet.py - ├── script - │   ├── run_train.sh - ├── train.py - ├── test.py - ├── mindspore_hub_conf.py - ``` - -3. Create a `{model_name}_{dataset}.md` file in `hub/mshub_res/assets/mindspore/1.6` using this [template](https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/mindspore/1.6/googlenet_cifar10.md#). Here `1.6` indicates the MindSpore version. The structure of the `hub/mshub_res` folder is as follows: - - ```text - hub - ├── mshub_res - │   ├── assets - │   ├── mindspore - │ ├── 1.6 - │ ├── googlenet_cifar10.md - │   ├── tools - │ ├── get_sha256.py - │ ├── load_markdown.py - │ └── md_validator.py - ``` - - Note that the `{model_name}_{dataset}.md` template must be filled in by providing `file-format`, `asset-link` and `asset-sha256` as shown below, which refer to the model file format, the model storage location from step 1, and the model hash value, respectively.
- - ```text - file-format: ckpt - asset-link: https://download.mindspore.cn/models/r1.6/googlenet_ascend_v160_cifar10_official_cv_acc92.53.ckpt - asset-sha256: b2f7fe14782a3ab88ad3534ed5f419b4bbc3b477706258bd6ed8f90f529775e7 - ``` - - The MindSpore Hub supports multiple model file formats including: - - [MindSpore CKPT](https://www.mindspore.cn/tutorials/en/master/beginner/save_load.html#saving-and-loading-the-model) - - [MindIR](https://www.mindspore.cn/tutorials/en/master/beginner/save_load.html) - - [AIR](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.export.html) - - [ONNX](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.export.html) - - For each pre-trained model, please run the following command to obtain a hash value required at `asset-sha256` of this `.md` file. Here the pre-trained model `googlenet.ckpt` is accessed from the storage location in step 1 and then saved in `tools` folder. The output hash value is: `b2f7fe14782a3ab88ad3534ed5f419b4bbc3b477706258bd6ed8f90f529775e7`. - - ```bash - cd /hub/mshub_res/tools - python get_sha256.py --file ../googlenet.ckpt - ``` - -4. Check the format of the markdown file locally using `hub/mshub_res/tools/md_validator.py` by running the following command. The output is `All Passed`, which indicates that the format and content of the `.md` file meets the requirements. - - ```bash - python md_validator.py --check_path ../assets/mindspore/1.6/googlenet_cifar10.md - ``` - -5. Create a PR in `mindspore/hub` repo. See our [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md#) for more information about creating a PR. - -Once your PR is merged into master branch here, your model will show up in [MindSpore Hub Website](https://www.mindspore.cn/hub) within 24 hours. Please refer to [README](https://gitee.com/mindspore/hub/blob/master/mshub_res/README.md#) for more information about model submission. 
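For reference, the value required in the `asset-sha256` field is a standard SHA-256 digest of the model file. Below is a minimal sketch of that computation; `get_sha256.py` itself may differ, this is only an illustration using Python's standard `hashlib`:

```python
import hashlib


def file_sha256(path, chunk_size=64 * 1024):
    """Return the hex SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in fixed-size chunks so large checkpoints
        # do not have to fit in memory at once.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Example usage (hypothetical path):
# print(file_sha256("../googlenet.ckpt"))
```

Chunked reading keeps memory usage constant regardless of checkpoint size, which matters for multi-hundred-megabyte `.ckpt` files.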
diff --git a/docs/hub/docs/source_zh_cn/conf.py b/docs/hub/docs/source_zh_cn/conf.py deleted file mode 100644 index 5133b2584a7f3ca2dd441d596e3ad37e9315f11e..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_zh_cn/conf.py +++ /dev/null @@ -1,224 +0,0 @@ -# Configuration file for the Sphinx documentation builder. -# -# This file only contains a selection of the most common options. For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import IPython -import re -import sys -from sphinx.ext import autodoc as sphinx_autodoc -import shutil - -import mindspore_hub - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Hub' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'myst_parser', - 'sphinx.ext.mathjax', - 'IPython.sphinxext.ipython_console_highlighting' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# Add any paths that contain templates here, relative to this directory. 
-templates_path = ['_templates'] - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -pygments_style = 'sphinx' - -autodoc_inherit_docstrings = False - -# -- Options for HTML output ------------------------------------------------- - -# Reconstruction of sphinx auto generated document translation. -language = 'zh_CN' -locale_dirs = ['../../../../resource/locale/'] -gettext_compact = False - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = 'sphinx_rtd_theme' - -html_search_language = 'zh' - -html_search_options = {'dict': '../../../resource/jieba.txt'} - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - 'python': ('https://docs.python.org/3', '../../../../resource/python_objects.inv'), -} - -from sphinx import directives -with open('../_ext/overwriteobjectiondirective.txt', 'r', encoding="utf8") as f: - exec(f.read(), directives.__dict__) - -from sphinx.ext import viewcode -with open('../_ext/overwriteviewcode.txt', 'r', encoding="utf8") as f: - exec(f.read(), viewcode.__dict__) - -# Modify default signatures for autodoc. 
-autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__) -autodoc_source_re = re.compile(r'stringify_signature\(.*?\)') -get_param_func_str = r"""\ -import re -import inspect as inspect_ - -def get_param_func(func): - try: - source_code = inspect_.getsource(func) - if func.__doc__: - source_code = source_code.replace(func.__doc__, '') - all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code) - all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\"")) - return all_params - except: - return '' - -def get_obj(obj): - if isinstance(obj, type): - return obj.__init__ - - return obj -""" - -with open(autodoc_source_path, "r+", encoding="utf8") as f: - code_str = f.read() - code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0) - exec(get_param_func_str, sphinx_autodoc.__dict__) - exec(code_str, sphinx_autodoc.__dict__) - -# Copy source files of chinese python api from hub repository. 
-from sphinx.util import logging -logger = logging.getLogger(__name__) - -copy_path = 'docs/api_python' -src_dir = os.path.join(os.getenv("HB_PATH"), copy_path) - -copy_list = [] - -present_path = os.path.dirname(__file__) - -for i in os.listdir(src_dir): - if os.path.isfile(os.path.join(src_dir,i)): - if os.path.exists('./'+i): - os.remove('./'+i) - shutil.copy(os.path.join(src_dir,i),'./'+i) - copy_list.append(os.path.join(present_path,i)) - else: - if os.path.exists('./'+i): - shutil.rmtree('./'+i) - shutil.copytree(os.path.join(src_dir,i),'./'+i) - copy_list.append(os.path.join(present_path,i)) - -# add view -import json - -if os.path.exists('../../../../tools/generate_html/version.json'): - with open('../../../../tools/generate_html/version.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily_dev.json'): - with open('../../../../tools/generate_html/daily_dev.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily.json'): - with open('../../../../tools/generate_html/daily.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) - -if os.getenv("HB_PATH").split('/')[-1]: - copy_repo = os.getenv("HB_PATH").split('/')[-1] -else: - copy_repo = os.getenv("HB_PATH").split('/')[-2] - -branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == copy_repo][0] -docs_branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == 'tutorials'][0] - -re_view = f"\n.. 
image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/{docs_branch}/" + \ - f"resource/_static/logo_source.svg\n :target: https://gitee.com/mindspore/{copy_repo}/blob/{branch}/" - -for cur, _, files in os.walk(present_path): - for i in files: - flag_copy = 0 - if i.endswith('.rst'): - for j in copy_list: - if j in cur: - flag_copy = 1 - break - if os.path.join(cur, i) in copy_list or flag_copy: - try: - with open(os.path.join(cur, i), 'r+', encoding='utf-8') as f: - content = f.read() - new_content = content - if '.. include::' in content and '.. automodule::' in content: - continue - if 'autosummary::' not in content and "\n=====" in content: - re_view_ = re_view + copy_path + cur.split(present_path)[-1] + '/' + i + \ - '\n :alt: 查看源文件\n\n' - new_content = re.sub('([=]{5,})\n', r'\1\n' + re_view_, content, 1) - if new_content != content: - f.seek(0) - f.truncate() - f.write(new_content) - except Exception: - print(f'打开{i}文件失败') - - -sys.path.append(os.path.abspath('../../../../resource/sphinx_ext')) -# import anchor_mod -import nbsphinx_mod - -sys.path.append(os.path.abspath('../../../../resource/search')) -import search_code - -sys.path.append(os.path.abspath('../../../../resource/custom_directives')) -from custom_directives import IncludeCodeDirective - -def setup(app): - app.add_directive('includecode', IncludeCodeDirective) diff --git a/docs/hub/docs/source_zh_cn/hub_installation.md b/docs/hub/docs/source_zh_cn/hub_installation.md deleted file mode 100644 index c2641d7577eb939cec863dc331cbdda598a8f467..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_zh_cn/hub_installation.md +++ /dev/null @@ -1,114 +0,0 @@ -# 安装MindSpore Hub - -- [安装MindSpore Hub](#安装mindspore-hub) - - [确认系统环境信息](#确认系统环境信息) - - [安装方式](#安装方式) - - [pip安装](#pip安装) - - [源码安装](#源码安装) - - [验证是否成功安装](#验证是否成功安装) - 
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/hub/docs/source_zh_cn/hub_installation.md)
-
-## Checking the System Environment
-
-- The supported hardware platforms are Ascend, GPU and CPU.
-- Confirm that [Python](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) 3.7.5 is installed.
-- The MindSpore Hub version must match the MindSpore version.
-- MindSpore Hub supports Linux distributions on the x86 64-bit or ARM 64-bit architecture.
-- When the machine is connected to the network, the dependencies listed in `setup.py` are downloaded automatically while the whl package is being installed; otherwise, install the dependencies manually.
-
-## Installation Methods
-
-You can install MindSpore Hub either by pip or from source code.
-
-### Installation by pip
-
-Download and install the MindSpore Hub whl package from the [release list](https://www.mindspore.cn/versions).
-
-```shell
-pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/Hub/any/mindspore_hub-{version}-py3-none-any.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
-```
-
-> - `{version}` denotes the MindSpore Hub version number. For example, to download MindSpore Hub 1.3.0, set `{version}` to 1.3.0.
-
-### Installation from Source Code
-
-1. Download the source code from Gitee.
-
-    ```bash
-    git clone https://gitee.com/mindspore/hub.git
-    ```
-
-2. Build and install MindSpore Hub.
-
-    ```bash
-    cd hub
-    python setup.py install
-    ```
-
-## Verifying the Installation
-
-Run the following commands in a network-connected environment to verify the installation.
-
-```python
-import mindspore_hub as mshub
-
-model = mshub.load("mindspore/1.6/lenet_mnist", num_classes=10)
-```
-
-If the following message appears, the installation is successful:
-
-```text
-Downloading data from url https://gitee.com/mindspore/hub/raw/master/mshub_res/assets/mindspore/1.6/lenet_mnist.md
-
-Download finished!
-File size = 0.00 Mb
-Checking /home/ma-user/.mscache/mindspore/1.6/lenet_mnist.md...Passed!
-```
-
-## FAQ
-
-**Q: What should I do when `SSL: CERTIFICATE_VERIFY_FAILED` occurs?**
-
-A: Due to your network environment, for example when you access the Internet through a proxy, certificate configuration problems often cause an SSL verification failure in Python. There are two solutions:
-
-- Configure the SSL certificate properly **(recommended)**
-- Add the following code before loading mindspore_hub (quickest)
-
-    ```python
-    import ssl
-    ssl._create_default_https_context = ssl._create_unverified_context
-
-    import mindspore_hub as mshub
-    model = mshub.load("mindspore/1.6/lenet_mnist", num_classes=10)
-    ```
-
-**Q: What should I do when `No module named src.*` occurs?**
-
-A: When the load API is used to load different models in the same process, the model file directory has to be inserted into the environment variables each time a model is loaded. Testing shows that Python only looks up `src.*` in the directory that was inserted first; even if you delete that directory, Python still searches it. Solution: instead of adding environment variables, copy all the files in the model directory to the current working directory. The code is as follows:
-
-```python
-# mindspore_hub_install_path/load.py
-def _copy_all_file_to_target_path(path, target_path):
-    if not os.path.exists(target_path):
-        os.makedirs(target_path)
-    path = os.path.realpath(path)
-    target_path = os.path.realpath(target_path)
-    for p in os.listdir(path):
-        copy_path = os.path.join(path, p)
-        target_dir = os.path.join(target_path, p)
-        _delete_if_exist(target_dir)
-        if os.path.isdir(copy_path):
-            _copy_all_file_to_target_path(copy_path, target_dir)
-        else:
-            shutil.copy(copy_path, target_dir)
-
-def _get_network_from_cache(name, path, *args, **kwargs):
-    _copy_all_file_to_target_path(path, os.getcwd())
-    config_path = os.path.join(os.getcwd(), HUB_CONFIG_FILE)
-    if not os.path.exists(config_path):
-        raise ValueError('{} not exists.'.format(config_path))
-    ......
-```
-
-**Note**: Loading a new model may overwrite some files of the previous model, but the necessary model files must exist during training. Therefore, you must finish training the previous model before loading a new one.
diff --git a/docs/hub/docs/source_zh_cn/index.rst b/docs/hub/docs/source_zh_cn/index.rst deleted file mode 100644 index b42e62220939dfd976ab808a619580b973a79c50..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_zh_cn/index.rst +++ /dev/null @@ -1,71 +0,0 @@
-MindSpore Hub Documentation
-==============================
-
-MindSpore Hub is a pre-trained model application tool of the MindSpore ecosystem.
-
-MindSpore Hub provides the following functions:
-
-- Plug-and-play model loading
-- Easy-to-use transfer learning
-
-.. 
code-block::
-
-    import mindspore
-    import mindspore_hub as mshub
-    from mindspore import set_context, GRAPH_MODE
-
-    set_context(mode=GRAPH_MODE,
-                device_target="Ascend",
-                device_id=0)
-
-    model = "mindspore/1.6/googlenet_cifar10"
-
-    # Initialize the number of classes based on the pre-trained model.
-    network = mshub.load(model, num_classes=10)
-    network.set_train(False)
-
-    # ...
-
-Code repository address: 
-
-Typical Application Scenarios of MindSpore Hub
-----------------------------------------------------
-
-1. `Inference Validation `_
-
-   mindspore_hub.load is used to load a pre-trained model, so that a model can be loaded with a single line of code.
-
-2. `Transfer Learning `_
-
-   After a model is loaded through mindspore_hub.load, an extra argument can be added to load only the feature extraction part of the neural network, which makes it easy to add new layers afterwards for transfer learning.
-
-3. `Model Publishing `_
-
-   You can publish your trained model to MindSpore Hub following the specified steps for other users to download and use.
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: Installation
-
-   hub_installation
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: Guide
-
-   loading_model_from_hub
-   publish_model
-
-.. toctree::
-   :maxdepth: 1
-   :caption: API References
-
-   hub
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Models
-
-   MindSpore Hub↗
diff --git a/docs/hub/docs/source_zh_cn/loading_model_from_hub.md b/docs/hub/docs/source_zh_cn/loading_model_from_hub.md deleted file mode 100644 index 68b94eb6829e79806864c8d2c5251a9c075c3258..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_zh_cn/loading_model_from_hub.md +++ /dev/null @@ -1,212 +0,0 @@
-# Loading a Model from Hub
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/hub/docs/source_zh_cn/loading_model_from_hub.md)
-
-## Overview
-
-For individual developers, training a good model from scratch requires a large amount of well-labeled data, sufficient computing resources and plenty of training and debugging time, which makes model training resource-intensive and raises the threshold of AI development. To address this, MindSpore Hub provides many weight files of trained models, so that developers with only a small amount of data can quickly train a good model in a short training time.
-
-This document shows how to use the models provided by MindSpore Hub for the two purposes of inference validation and transfer learning, obtaining a good model quickly with a small amount of data.
-
-## Used for Inference Validation
-
-The `mindspore_hub.load` API is used to load a pre-trained model, so that a model can be loaded with a single line of code. The main model loading process is as follows:
-
-1. 
Search for a model of interest on the [MindSpore Hub website](https://www.mindspore.cn/hub).
-
-    For example, to classify the CIFAR-10 dataset with GoogleNet, search with the keyword `GoogleNet` on the [MindSpore Hub website](https://www.mindspore.cn/hub). The page returns all models related to GoogleNet. Go to the page of a related model and check its `Usage`. **Note**: if a page has no `Usage`, the model does not currently support loading via MindSpore Hub.
-
-2. Load the model according to its `Usage`. Sample code is as follows:
-
-    ```python
-    import mindspore_hub as mshub
-    import mindspore
-    from mindspore import Tensor, nn, Model, set_context, GRAPH_MODE
-    from mindspore import dtype as mstype
-    import mindspore.dataset.vision as vision
-
-    set_context(mode=GRAPH_MODE,
-                device_target="Ascend",
-                device_id=0)
-
-    model = "mindspore/1.6/googlenet_cifar10"
-
-    # Initialize the number of classes based on the pre-trained model.
-    network = mshub.load(model, num_classes=10)
-    network.set_train(False)
-
-    # ...
-
-    ```
-
-3. After the model is loaded, you can use MindSpore for inference. See [Inference Model Overview](https://www.mindspore.cn/tutorials/zh-CN/master/model_infer/ms_infer/llm_inference_overview.html).
-
-## Used for Transfer Learning
-
-After a model is loaded through `mindspore_hub.load`, an extra argument can be added to load only the feature extraction part of the neural network, which makes it easy to add new layers afterwards for transfer learning. This function is available on the model details page when the model developer has added an extra argument (for example, `include_top`) to the model construction. The value of `include_top` is True or False, indicating whether to keep the top fully connected layer.
-
-The following takes [MobileNetV2](https://gitee.com/mindspore/models/tree/master/research/cv/centerface) as an example to describe how to load a model pre-trained on OpenImage and perform transfer learning (retraining) on a specific sub-task dataset. The main steps are as follows:
-
-1. Search for a model of interest on the [MindSpore Hub website](https://www.mindspore.cn/hub) and check its `Usage`.
-
-2. 
Load the MindSpore Hub model according to its `Usage`. Note: the `include_top` argument has to be provided by the model developer.
-
-    ```python
-    import os
-    import mindspore_hub as mshub
-    import mindspore
-    from mindspore import Tensor, nn, set_context, GRAPH_MODE, train
-    from mindspore.nn import Momentum
-    from mindspore import save_checkpoint, load_checkpoint, load_param_into_net
-    from mindspore import ops
-    import mindspore.dataset as ds
-    import mindspore.dataset.transforms as transforms
-    import mindspore.dataset.vision as vision
-    from mindspore import dtype as mstype
-    from mindspore import Model
-
-    set_context(mode=GRAPH_MODE, device_target="Ascend", device_id=0)
-
-    model = "mindspore/1.6/mobilenetv2_imagenet2012"
-    network = mshub.load(model, num_classes=500, include_top=False, activation="Sigmoid")
-    network.set_train(False)
-    ```
-
-3. Add a classification layer related to the new task on top of the existing model structure.
-
-    ```python
-    class ReduceMeanFlatten(nn.Cell):
-        def __init__(self):
-            super(ReduceMeanFlatten, self).__init__()
-            self.mean = ops.ReduceMean(keep_dims=True)
-            self.flatten = nn.Flatten()
-
-        def construct(self, x):
-            x = self.mean(x, (2, 3))
-            x = self.flatten(x)
-            return x
-
-    # Check MindSpore Hub website to conclude that the last output shape is 1280.
-    last_channel = 1280
-
-    # The number of classes in target task is 10.
-    num_classes = 10
-
-    reducemean_flatten = ReduceMeanFlatten()
-
-    classification_layer = nn.Dense(last_channel, num_classes)
-    classification_layer.set_train(True)
-
-    train_network = nn.SequentialCell([network, reducemean_flatten, classification_layer])
-    ```
-
-4. 
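The `ReduceMeanFlatten` cell above can be sketched standalone with NumPy as a stand-in for the MindSpore ops (shapes here are illustrative, not taken from the tutorial): averaging a `[N, C, H, W]` feature map over its spatial axes and flattening yields the 2-D `[N, C]` tensor that `nn.Dense(last_channel, num_classes)` expects.

```python
import numpy as np

# NumPy stand-in for ReduceMeanFlatten (illustrative, not the MindSpore API):
# mean over the spatial axes (2, 3) with keepdims, then flatten to [N, C].
def reduce_mean_flatten(x):
    return x.mean(axis=(2, 3), keepdims=True).reshape(x.shape[0], -1)

features = np.ones((4, 1280, 7, 7), dtype=np.float32)  # a MobileNetV2-like feature map
print(reduce_mean_flatten(features).shape)  # (4, 1280)
```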
Define the dataset loading function.
-
-    As shown below, the dataset used for the fine-tuning task is [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html). Note that the binary version (`binary version`) of the data is required here. After downloading and decompressing it, the data can be loaded and processed with the following code. `dataset_path` is the save path of the dataset, which is specified by the user.
-
-    ```python
-    def create_cifar10dataset(dataset_path, batch_size, usage='train', shuffle=True):
-        data_set = ds.Cifar10Dataset(dataset_dir=dataset_path, usage=usage, shuffle=shuffle)
-
-        # define map operations
-        trans = [
-            vision.Resize((256, 256)),
-            vision.RandomHorizontalFlip(prob=0.5),
-            vision.Rescale(1.0 / 255.0, 0.0),
-            vision.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
-            vision.HWC2CHW()
-        ]
-
-        type_cast_op = transforms.TypeCast(mstype.int32)
-
-        data_set = data_set.map(operations=type_cast_op, input_columns="label", num_parallel_workers=8)
-        data_set = data_set.map(operations=trans, input_columns="image", num_parallel_workers=8)
-
-        # apply batch operations
-        data_set = data_set.batch(batch_size, drop_remainder=True)
-        return data_set
-
-    # Create Dataset
-    dataset_path = "/path_to_dataset/cifar-10-batches-bin"
-    dataset = create_cifar10dataset(dataset_path, batch_size=32, usage='train', shuffle=True)
-    ```
-
-5. Select a loss function, an optimizer and a learning rate for model training.
-
-    ```python
-    def generate_steps_lr(lr_init, steps_per_epoch, total_epochs):
-        total_steps = total_epochs * steps_per_epoch
-        decay_epoch_index = [0.3*total_steps, 0.6*total_steps, 0.8*total_steps]
-        lr_each_step = []
-        for i in range(total_steps):
-            if i < decay_epoch_index[0]:
-                lr = lr_init
-            elif i < decay_epoch_index[1]:
-                lr = lr_init * 0.1
-            elif i < decay_epoch_index[2]:
-                lr = lr_init * 0.01
-            else:
-                lr = lr_init * 0.001
-            lr_each_step.append(lr)
-        return lr_each_step
-
-    # Set epoch size
-    epoch_size = 60
-
-    # Wrap the backbone network with loss. 
-    loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
-    loss_net = nn.WithLossCell(train_network, loss_fn)
-    steps_per_epoch = dataset.get_dataset_size()
-    lr = generate_steps_lr(lr_init=0.01, steps_per_epoch=steps_per_epoch, total_epochs=epoch_size)
-
-    # Create an optimizer.
-    optim = Momentum(filter(lambda x: x.requires_grad, classification_layer.get_parameters()), Tensor(lr, mindspore.float32), 0.9, 4e-5)
-    train_net = nn.TrainOneStepCell(loss_net, optim)
-    ```
-
-6. Start retraining.
-
-    ```python
-    for epoch in range(epoch_size):
-        for i, items in enumerate(dataset):
-            data, label = items
-            data = mindspore.Tensor(data)
-            label = mindspore.Tensor(label)
-
-            loss = train_net(data, label)
-            print(f"epoch: {epoch}/{epoch_size}, loss: {loss}")
-        # Save the ckpt file for each epoch.
-        if not os.path.exists('ckpt'):
-            os.mkdir('ckpt')
-        ckpt_path = f"./ckpt/cifar10_finetune_epoch{epoch}.ckpt"
-        save_checkpoint(train_network, ckpt_path)
-    ```
-
-7. Test the model accuracy on the test set.
-
-    ```python
-    model = "mindspore/1.6/mobilenetv2_imagenet2012"
-
-    network = mshub.load(model, num_classes=500, pretrained=True, include_top=False, activation="Sigmoid")
-    network.set_train(False)
-    reducemean_flatten = ReduceMeanFlatten()
-    classification_layer = nn.Dense(last_channel, num_classes)
-    classification_layer.set_train(False)
-    softmax = nn.Softmax()
-    network = nn.SequentialCell([network, reducemean_flatten, classification_layer, softmax])
-
-    # Load a pre-trained ckpt file.
-    ckpt_path = "./ckpt/cifar10_finetune_epoch59.ckpt"
-    trained_ckpt = load_checkpoint(ckpt_path)
-    load_param_into_net(classification_layer, trained_ckpt)
-
-    loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
-
-    # Define loss and create model. 
-    eval_dataset = create_cifar10dataset(dataset_path, batch_size=32, usage='test', shuffle=False)
-    eval_metrics = {'Loss': train.Loss(),
-                    'Top1-Acc': train.Top1CategoricalAccuracy(),
-                    'Top5-Acc': train.Top5CategoricalAccuracy()}
-    model = Model(network, loss_fn=loss, optimizer=None, metrics=eval_metrics)
-    metrics = model.eval(eval_dataset)
-    print("metric: ", metrics)
-    ```
diff --git a/docs/hub/docs/source_zh_cn/publish_model.md b/docs/hub/docs/source_zh_cn/publish_model.md deleted file mode 100644 index 51d3160d5c24a65087a5ff2bb0ba8d1163db2032..0000000000000000000000000000000000000000 --- a/docs/hub/docs/source_zh_cn/publish_model.md +++ /dev/null @@ -1,74 +0,0 @@
-# Publishing a Model
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/hub/docs/source_zh_cn/publish_model.md)
-
-## Overview
-
-[MindSpore Hub](https://www.mindspore.cn/hub) is a platform that stores pre-trained models provided officially by MindSpore or by third-party developers. It provides application developers with simple and easy-to-use model loading and fine-tuning APIs, so that users can perform inference or fine-tuning based on pre-trained models and deploy them in their own applications. Users can also publish their trained models to MindSpore Hub following the specified steps for other users to download and use.
-
-This tutorial takes GoogleNet as an example to describe the model upload steps for model developers who want to publish models to MindSpore Hub.
-
-## Publishing a Model to MindSpore Hub
-
-You can publish a model to MindSpore Hub by submitting a PR to the [hub](https://gitee.com/mindspore/hub) repository. Taking GoogleNet as an example, the steps to submit a model to MindSpore Hub are listed below.
-
-1. Host your pre-trained model in an accessible storage location.
-
-2. Following the [template](https://gitee.com/mindspore/models/blob/master/research/cv/SE_ResNeXt50/mindspore_hub_conf.py), add the model generation file `mindspore_hub_conf.py` to your own code repository. The file is placed as follows:
-
-    ```text
-    googlenet
-    ├── src
-    │   ├── googlenet.py
-    ├── script
-    │   ├── run_train.sh
-    ├── train.py
-    ├── test.py
-    ├── mindspore_hub_conf.py
-    ```
-
-3. 
Following the [template](https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/mindspore/1.6/googlenet_cifar10.md#), create a `{model_name}_{dataset}.md` file in the `hub/mshub_res/assets/mindspore/1.6` folder, where `1.6` is the MindSpore version number. The directory structure of `hub/mshub_res` is as follows:
-
-    ```text
-    hub
-    ├── mshub_res
-    │   ├── assets
-    │   │   └── mindspore
-    │   │       └── 1.6
-    │   │           └── googlenet_cifar10.md
-    │   ├── tools
-    │   │   ├── get_sha256.py
-    │   │   ├── load_markdown.py
-    │   │   └── md_validator.py
-    ```
-
-    Note that the `{model_name}_{dataset}.md` file must be filled with the `file-format`, `asset-link` and `asset-sha256` information shown below, which indicate the model file format, the model storage location (obtained in step 1) and the model hash value, respectively.
-
-    ```text
-    file-format: ckpt
-    asset-link: https://download.mindspore.cn/models/r1.6/googlenet_ascend_v160_cifar10_official_cv_acc92.53.ckpt
-    asset-sha256: b2f7fe14782a3ab88ad3534ed5f419b4bbc3b477706258bd6ed8f90f529775e7
-    ```
-
-    The model file formats supported by MindSpore Hub are:
-    - [MindSpore CKPT](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/save_load.html#保存与加载)
-    - [MINDIR](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/save_load.html#保存和加载mindir)
-    - [AIR](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.export.html#mindspore.export)
-    - [ONNX](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.export.html#mindspore.export)
-
-    For each pre-trained model, run the following command to obtain the hash value required at `asset-sha256` in the `.md` file, where `googlenet.ckpt` is the pre-trained model downloaded from the storage location in step 1 and saved to the `tools` folder. The hash value output by the command is `b2f7fe14782a3ab88ad3534ed5f419b4bbc3b477706258bd6ed8f90f529775e7`.
-
-    ```bash
-    cd /hub/mshub_res/tools
-    python get_sha256.py --file ../googlenet.ckpt
-    ```
-
-4. Check the format of the `.md` file locally with `hub/mshub_res/tools/md_validator.py` by running the following command. If the output is `All Passed`, the format and content of the `.md` file both meet the requirements.
-
-    ```bash
-    python md_validator.py --check_path ../assets/mindspore/1.6/googlenet_cifar10.md
-    ```
-
-5. 
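The `get_sha256.py` step above amounts to hashing the checkpoint file's raw bytes. A minimal sketch of that idea, assuming nothing about the actual tool beyond standard SHA-256 (the helper name and path are hypothetical):

```python
import hashlib

# Hedged sketch: an asset-sha256 value is the SHA-256 hex digest of the model
# file's bytes, read in chunks so large checkpoints do not exhaust memory.
def file_sha256(path, chunk_size=1 << 20):
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha.update(chunk)
    return sha.hexdigest()

# Usage (hypothetical path): file_sha256("../googlenet.ckpt")
```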
Create a PR in the `mindspore/hub` repository. For details about how to create a PR, see the [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md#).
-
-Once your PR is merged into the master branch of `mindspore/hub`, your model will appear on the [MindSpore Hub website](https://www.mindspore.cn/hub) within 24 hours. For more details about model uploading, refer to the [README](https://gitee.com/mindspore/hub/blob/master/mshub_res/README.md#).
diff --git a/docs/probability/docs/Makefile b/docs/probability/docs/Makefile deleted file mode 100644 index c04972c6381abef7fdc66b098d4737a6789d709d..0000000000000000000000000000000000000000 --- a/docs/probability/docs/Makefile +++ /dev/null @@ -1,25 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS ?=
-SPHINXBUILD ?= sphinx-build
-SOURCEDIR = source_zh_cn
-BUILDDIR = build_zh_cn
-
-# Put it first so that "make" without argument is like "make help".
-help:
-	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: clean
-
-clean:
-	-rm -rf $(BUILDDIR)/* $(SOURCEDIR)/nn_probability
diff --git a/docs/probability/docs/_ext/customdocumenter.txt b/docs/probability/docs/_ext/customdocumenter.txt deleted file mode 100644 index 2d37ae41f6772a21da2a7dc5c7bff75128e68330..0000000000000000000000000000000000000000 --- a/docs/probability/docs/_ext/customdocumenter.txt +++ /dev/null @@ -1,245 +0,0 @@
-import re
-import os
-from sphinx.ext.autodoc import Documenter
-
-
-class CustomDocumenter(Documenter):
-
-    def document_members(self, all_members: bool = False) -> None:
-        """Generate reST for member documentation.
-
-        If *all_members* is True, do all members, else those given by
-        *self.options.members*. 
-        """
-        # set current namespace for finding members
-        self.env.temp_data['autodoc:module'] = self.modname
-        if self.objpath:
-            self.env.temp_data['autodoc:class'] = self.objpath[0]
-
-        want_all = all_members or self.options.inherited_members or \
-            self.options.members is ALL
-        # find out which members are documentable
-        members_check_module, members = self.get_object_members(want_all)
-
-        # **** Exclude interfaces whose Chinese API docs are already written ****
-        file_path = os.path.join(self.env.app.srcdir, self.env.docname+'.rst')
-        exclude_re = re.compile(r'(.. py:class::|.. py:function::)\s+(.*?)(\(|\n)')
-        includerst_re = re.compile(r'.. include::\s+(.*?)\n')
-        with open(file_path, 'r', encoding='utf-8') as f:
-            content = f.read()
-        excluded_members = exclude_re.findall(content)
-        if excluded_members:
-            excluded_members = [i[1].split('.')[-1] for i in excluded_members]
-        rst_included = includerst_re.findall(content)
-        if rst_included:
-            for i in rst_included:
-                include_path = os.path.join(os.path.dirname(file_path), i)
-                if os.path.exists(include_path):
-                    with open(include_path, 'r', encoding='utf8') as g:
-                        content_ = g.read()
-                    excluded_member_ = exclude_re.findall(content_)
-                    if excluded_member_:
-                        excluded_member_ = [j[1].split('.')[-1] for j in excluded_member_]
-                        excluded_members.extend(excluded_member_)
-
-        if excluded_members:
-            if self.options.exclude_members:
-                self.options.exclude_members |= set(excluded_members)
-            else:
-                self.options.exclude_members = excluded_members
-
-        # remove members given by exclude-members
-        if self.options.exclude_members:
-            members = [
-                (membername, member) for (membername, member) in members
-                if (
-                    self.options.exclude_members is ALL or
-                    membername not in self.options.exclude_members
-                )
-            ]
-
-        # document non-skipped members
-        memberdocumenters = []  # type: List[Tuple[Documenter, bool]]
-        for (mname, member, isattr) in self.filter_members(members, want_all):
-            classes = [cls for cls in self.documenters.values()
-                       if cls.can_document_member(member, mname, isattr, 
self)] - if not classes: - # don't know how to document this member - continue - # prefer the documenter with the highest priority - classes.sort(key=lambda cls: cls.priority) - # give explicitly separated module name, so that members - # of inner classes can be documented - full_mname = self.modname + '::' + \ - '.'.join(self.objpath + [mname]) - documenter = classes[-1](self.directive, full_mname, self.indent) - memberdocumenters.append((documenter, isattr)) - member_order = self.options.member_order or \ - self.env.config.autodoc_member_order - if member_order == 'groupwise': - # sort by group; relies on stable sort to keep items in the - # same group sorted alphabetically - memberdocumenters.sort(key=lambda e: e[0].member_order) - elif member_order == 'bysource' and self.analyzer: - # sort by source order, by virtue of the module analyzer - tagorder = self.analyzer.tagorder - - def keyfunc(entry: Tuple[Documenter, bool]) -> int: - fullname = entry[0].name.split('::')[1] - return tagorder.get(fullname, len(tagorder)) - memberdocumenters.sort(key=keyfunc) - - for documenter, isattr in memberdocumenters: - documenter.generate( - all_members=True, real_modname=self.real_modname, - check_module=members_check_module and not isattr) - - # reset current objects - self.env.temp_data['autodoc:module'] = None - self.env.temp_data['autodoc:class'] = None - - def generate(self, more_content: Any = None, real_modname: str = None, - check_module: bool = False, all_members: bool = False) -> None: - """Generate reST for the object given by *self.name*, and possibly for - its members. - - If *more_content* is given, include that content. If *real_modname* is - given, use that module name to find attribute docs. If *check_module* is - True, only generate if the object is defined in the module name it is - imported from. If *all_members* is True, document all members. 
- """ - if not self.parse_name(): - # need a module to import - logger.warning( - __('don\'t know which module to import for autodocumenting ' - '%r (try placing a "module" or "currentmodule" directive ' - 'in the document, or giving an explicit module name)') % - self.name, type='autodoc') - return - - # now, import the module and get object to document - if not self.import_object(): - return - - # If there is no real module defined, figure out which to use. - # The real module is used in the module analyzer to look up the module - # where the attribute documentation would actually be found in. - # This is used for situations where you have a module that collects the - # functions and classes of internal submodules. - self.real_modname = real_modname or self.get_real_modname() # type: str - - # try to also get a source code analyzer for attribute docs - try: - self.analyzer = ModuleAnalyzer.for_module(self.real_modname) - # parse right now, to get PycodeErrors on parsing (results will - # be cached anyway) - self.analyzer.find_attr_docs() - except PycodeError as err: - logger.debug('[autodoc] module analyzer failed: %s', err) - # no source file -- e.g. for builtin and C modules - self.analyzer = None - # at least add the module.__file__ as a dependency - if hasattr(self.module, '__file__') and self.module.__file__: - self.directive.filename_set.add(self.module.__file__) - else: - self.directive.filename_set.add(self.analyzer.srcname) - - # check __module__ of object (for members not given explicitly) - if check_module: - if not self.check_module(): - return - - # document members, if possible - self.document_members(all_members) - - -class ModuleDocumenter(CustomDocumenter): - """ - Specialized Documenter subclass for modules. 
- """ - objtype = 'module' - content_indent = '' - titles_allowed = True - - option_spec = { - 'members': members_option, 'undoc-members': bool_option, - 'noindex': bool_option, 'inherited-members': bool_option, - 'show-inheritance': bool_option, 'synopsis': identity, - 'platform': identity, 'deprecated': bool_option, - 'member-order': identity, 'exclude-members': members_set_option, - 'private-members': bool_option, 'special-members': members_option, - 'imported-members': bool_option, 'ignore-module-all': bool_option - } # type: Dict[str, Callable] - - def __init__(self, *args: Any) -> None: - super().__init__(*args) - merge_members_option(self.options) - - @classmethod - def can_document_member(cls, member: Any, membername: str, isattr: bool, parent: Any - ) -> bool: - # don't document submodules automatically - return False - - def resolve_name(self, modname: str, parents: Any, path: str, base: Any - ) -> Tuple[str, List[str]]: - if modname is not None: - logger.warning(__('"::" in automodule name doesn\'t make sense'), - type='autodoc') - return (path or '') + base, [] - - def parse_name(self) -> bool: - ret = super().parse_name() - if self.args or self.retann: - logger.warning(__('signature arguments or return annotation ' - 'given for automodule %s') % self.fullname, - type='autodoc') - return ret - - def add_directive_header(self, sig: str) -> None: - Documenter.add_directive_header(self, sig) - - sourcename = self.get_sourcename() - - # add some module-specific options - if self.options.synopsis: - self.add_line(' :synopsis: ' + self.options.synopsis, sourcename) - if self.options.platform: - self.add_line(' :platform: ' + self.options.platform, sourcename) - if self.options.deprecated: - self.add_line(' :deprecated:', sourcename) - - def get_object_members(self, want_all: bool) -> Tuple[bool, List[Tuple[str, object]]]: - if want_all: - if (self.options.ignore_module_all or not - hasattr(self.object, '__all__')): - # for implicit module members, check 
__module__ to avoid - # documenting imported objects - return True, get_module_members(self.object) - else: - memberlist = self.object.__all__ - # Sometimes __all__ is broken... - if not isinstance(memberlist, (list, tuple)) or not \ - all(isinstance(entry, str) for entry in memberlist): - logger.warning( - __('__all__ should be a list of strings, not %r ' - '(in module %s) -- ignoring __all__') % - (memberlist, self.fullname), - type='autodoc' - ) - # fall back to all members - return True, get_module_members(self.object) - else: - memberlist = self.options.members or [] - ret = [] - for mname in memberlist: - try: - ret.append((mname, safe_getattr(self.object, mname))) - except AttributeError: - logger.warning( - __('missing attribute mentioned in :members: or __all__: ' - 'module %s, attribute %s') % - (safe_getattr(self.object, '__name__', '???'), mname), - type='autodoc' - ) - return False, ret diff --git a/docs/probability/docs/_ext/myautosummary.py b/docs/probability/docs/_ext/myautosummary.py deleted file mode 100644 index 0538bda06871161fff34ba331b82448a0c87da9f..0000000000000000000000000000000000000000 --- a/docs/probability/docs/_ext/myautosummary.py +++ /dev/null @@ -1,519 +0,0 @@ -"""Customized autosummary directives for sphinx.""" -import os -import re -import inspect -import importlib -from typing import List, Tuple -from docutils.nodes import Node -from sphinx.locale import __ -from sphinx.ext.autosummary import Autosummary, posixpath, addnodes, logger, Matcher, autosummary_toc, get_import_prefixes_from_env -from sphinx.ext.autosummary import mock, StringList, ModuleType, get_documenter, ModuleAnalyzer, PycodeError, mangle_signature -from sphinx.ext.autosummary import import_by_name, extract_summary, autosummary_table, nodes, switch_source_input, rst -from sphinx.ext.autodoc.directive import DocumenterBridge, Options - - -class MsAutosummary(Autosummary): - """ - Inherited from sphinx's autosummary, add titles and a column for the generated table. 
- """ - - def init(self): - """ - init method - """ - self.find_doc_name = "" - self.third_title = "" - self.default_doc = "" - - def extract_env_summary(self, doc: List[str]) -> str: - """Extract env summary from docstring.""" - env_sum = self.default_doc - for i, piece in enumerate(doc): - if piece.startswith(self.find_doc_name): - env_sum = doc[i+1][4:] - return env_sum - - def run(self): - """ - run method - """ - self.init() - self.bridge = DocumenterBridge(self.env, self.state.document.reporter, - Options(), self.lineno, self.state) - - names = [x.strip().split()[0] for x in self.content - if x.strip() and re.search(r'^[~a-zA-Z_]', x.strip()[0])] - items = self.get_items(names) - teble_nodes = self.get_table(items) - - if 'toctree' in self.options: - dirname = posixpath.dirname(self.env.docname) - - tree_prefix = self.options['toctree'].strip() - docnames = [] - excluded = Matcher(self.config.exclude_patterns) - for item in items: - docname = posixpath.join(tree_prefix, item[3]) - docname = posixpath.normpath(posixpath.join(dirname, docname)) - if docname not in self.env.found_docs: - location = self.state_machine.get_source_and_line(self.lineno) - if excluded(self.env.doc2path(docname, None)): - msg = __('autosummary references excluded document %r. Ignored.') - else: - msg = __('autosummary: stub file not found %r. ' - 'Check your autosummary_generate setting.') - logger.warning(msg, item[3], location=location) - continue - docnames.append(docname) - - if docnames: - tocnode = addnodes.toctree() - tocnode['includefiles'] = docnames - tocnode['entries'] = [(None, docn) for docn in docnames] - tocnode['maxdepth'] = -1 - tocnode['glob'] = None - teble_nodes.append(autosummary_toc('', '', tocnode)) - return teble_nodes - - def get_items(self, names: List[str]) -> List[Tuple[str, str, str, str, str]]: - """Try to import the given names, and return a list of - ``[(name, signature, summary_string, real_name, env_summary), ...]``. 
- """ - prefixes = get_import_prefixes_from_env(self.env) - items = [] # type: List[Tuple[str, str, str, str, str]] - max_item_chars = 50 - - for name in names: - display_name = name - if name.startswith('~'): - name = name[1:] - display_name = name.split('.')[-1] - try: - with mock(self.config.autosummary_mock_imports): - real_name, obj, parent, modname = import_by_name(name, prefixes=prefixes) - except ImportError: - logger.warning(__('failed to import %s'), name) - items.append((name, '', '', name, '')) - continue - - self.bridge.result = StringList() # initialize for each documenter - full_name = real_name - if not isinstance(obj, ModuleType): - # give explicitly separated module name, so that members - # of inner classes can be documented - full_name = modname + '::' + full_name[len(modname) + 1:] - # NB. using full_name here is important, since Documenters - # handle module prefixes slightly differently - doccls = get_documenter(self.env.app, obj, parent) - documenter = doccls(self.bridge, full_name) - - if not documenter.parse_name(): - logger.warning(__('failed to parse name %s'), real_name) - items.append((display_name, '', '', real_name, '')) - continue - if not documenter.import_object(): - logger.warning(__('failed to import object %s'), real_name) - items.append((display_name, '', '', real_name, '')) - continue - if documenter.options.members and not documenter.check_module(): - continue - - # try to also get a source code analyzer for attribute docs - try: - documenter.analyzer = ModuleAnalyzer.for_module( - documenter.get_real_modname()) - # parse right now, to get PycodeErrors on parsing (results will - # be cached anyway) - documenter.analyzer.find_attr_docs() - except PycodeError as err: - logger.debug('[autodoc] module analyzer failed: %s', err) - # no source file -- e.g. 
for builtin and C modules - documenter.analyzer = None - - # -- Grab the signature - - try: - sig = documenter.format_signature(show_annotation=False) - except TypeError: - # the documenter does not support ``show_annotation`` option - sig = documenter.format_signature() - - if not sig: - sig = '' - else: - max_chars = max(10, max_item_chars - len(display_name)) - sig = mangle_signature(sig, max_chars=max_chars) - - # -- Grab the summary - - documenter.add_content(None) - summary = extract_summary(self.bridge.result.data[:], self.state.document) - env_sum = self.extract_env_summary(self.bridge.result.data[:]) - items.append((display_name, sig, summary, real_name, env_sum)) - - return items - - def get_table(self, items: List[Tuple[str, str, str, str, str]]) -> List[Node]: - """Generate a proper list of table nodes for autosummary:: directive. - - *items* is a list produced by :meth:`get_items`. - """ - table_spec = addnodes.tabular_col_spec() - table_spec['spec'] = r'\X{1}{2}\X{1}{2}' - - table = autosummary_table('') - real_table = nodes.table('', classes=['longtable']) - table.append(real_table) - group = nodes.tgroup('', cols=3) - real_table.append(group) - group.append(nodes.colspec('', colwidth=10)) - group.append(nodes.colspec('', colwidth=70)) - group.append(nodes.colspec('', colwidth=30)) - body = nodes.tbody('') - group.append(body) - - def append_row(*column_texts: str) -> None: - row = nodes.row('', color="red") - source, line = self.state_machine.get_source_and_line() - for text in column_texts: - node = nodes.paragraph('') - vl = StringList() - vl.append(text, '%s:%d:' % (source, line)) - with switch_source_input(self.state, vl): - self.state.nested_parse(vl, 0, node) - try: - if isinstance(node[0], nodes.paragraph): - node = node[0] - except IndexError: - pass - row.append(nodes.entry('', node)) - body.append(row) - - # add table's title - append_row("**API Name**", "**Description**", self.third_title) - for name, sig, summary, real_name, env_sum in 
items: - qualifier = 'obj' - if 'nosignatures' not in self.options: - col1 = ':%s:`%s <%s>`\\ %s' % (qualifier, name, real_name, rst.escape(sig)) - else: - col1 = ':%s:`%s <%s>`' % (qualifier, name, real_name) - col2 = summary - col3 = env_sum - append_row(col1, col2, col3) - - return [table_spec, table] - - -class MsNoteAutoSummary(MsAutosummary): - """ - Inherited from MsAutosummary. Add a third column about `Note` to the table. - """ - - def init(self): - """ - init method - """ - self.find_doc_name = ".. note::" - self.third_title = "**Note**" - self.default_doc = "None" - - def extract_env_summary(self, doc: List[str]) -> str: - """Extract env summary from docstring.""" - env_sum = self.default_doc - for piece in doc: - if piece.startswith(self.find_doc_name): - env_sum = piece[10:] - return env_sum - -class MsPlatformAutoSummary(MsAutosummary): - """ - Inherited from MsAutosummary. Add a third column about `Supported Platforms` to the table. - """ - def init(self): - """ - init method - """ - self.find_doc_name = "Supported Platforms:" - self.third_title = "**{}**".format(self.find_doc_name[:-1]) - self.default_doc = "To Be Developed" - -class MsCnAutoSummary(Autosummary): - """Overwrite MsPlatformAutosummary for chinese python api.""" - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.table_head = () - self.find_doc_name = "" - self.third_title = "" - self.default_doc = "" - self.third_name_en = "" - - def get_third_column_en(self, doc): - """Get the third column for en.""" - third_column = self.default_doc - for i, piece in enumerate(doc): - if piece.startswith(self.third_name_en): - try: - if "eprecated" in doc[i+1][4:]: - third_column = "弃用" - else: - third_column = doc[i+1][4:] - except IndexError: - third_column = '' - return third_column - - def get_summary_re(self, display_name: str): - return re.compile(rf'\.\. 
\w+:\w+::\s+{display_name}.*?\n\n\s+(.*?)[。\n]') - - def run(self) -> List[Node]: - self.bridge = DocumenterBridge(self.env, self.state.document.reporter, - Options(), self.lineno, self.state) - - names = [x.strip().split()[0] for x in self.content - if x.strip() and re.search(r'^[~a-zA-Z_]', x.strip()[0])] - items = self.get_items(names) - #pylint: disable=redefined-outer-name - nodes = self.get_table(items) - - dirname = posixpath.dirname(self.env.docname) - - tree_prefix = self.options['toctree'].strip() - docnames = [] - names = [i[0] for i in items] - for name in names: - docname = posixpath.join(tree_prefix, name) - docname = posixpath.normpath(posixpath.join(dirname, docname)) - if docname not in self.env.found_docs: - continue - - docnames.append(docname) - - if docnames: - tocnode = addnodes.toctree() - tocnode['includefiles'] = docnames - tocnode['entries'] = [(None, docn) for docn in docnames] - tocnode['maxdepth'] = -1 - tocnode['glob'] = None - - nodes.append(autosummary_toc('', '', tocnode)) - - return nodes - - def get_items(self, names: List[str]) -> List[Tuple[str, str, str, str]]: - """Try to import the given names, and return a list of - ``[(name, signature, summary_string, real_name), ...]``. 
- """ - prefixes = get_import_prefixes_from_env(self.env) - doc_path = os.path.dirname(self.state.document.current_source) - items = [] # type: List[Tuple[str, str, str, str]] - max_item_chars = 50 - origin_rst_files = self.env.config.rst_files - all_rst_files = self.env.found_docs - generated_files = all_rst_files.difference(origin_rst_files) - - for name in names: - display_name = name - if name.startswith('~'): - name = name[1:] - display_name = name.split('.')[-1] - - dir_name = self.options['toctree'] - spec_path = os.path.join('api_python', dir_name, display_name) - file_path = os.path.join(doc_path, dir_name, display_name+'.rst') - if os.path.exists(file_path) and spec_path not in generated_files: - summary_re = self.get_summary_re(display_name) - content = '' - with open(os.path.join(doc_path, dir_name, display_name+'.rst'), 'r', encoding='utf-8') as f: - content = f.read() - if content: - summary_str = summary_re.findall(content) - if summary_str: - if re.findall("[::,,。.;;]", summary_str[0][-1]): - logger.warning(f"{display_name}接口的概述格式需调整") - summary_str = summary_str[0] + '。' - else: - summary_str = '' - if not self.table_head: - items.append((display_name, summary_str)) - else: - third_str = self.get_third_column(display_name, content) - if third_str: - third_str = third_str[0] - else: - third_str = '' - - items.append((display_name, summary_str, third_str)) - else: - try: - with mock(self.config.autosummary_mock_imports): - real_name, obj, parent, modname = import_by_name(name, prefixes=prefixes) - except ImportError: - logger.warning(__('failed to import %s'), name) - items.append((name, '', '')) - continue - - self.bridge.result = StringList() # initialize for each documenter - full_name = real_name - if not isinstance(obj, ModuleType): - # give explicitly separated module name, so that members - # of inner classes can be documented - full_name = modname + '::' + full_name[len(modname) + 1:] - # NB. 
using full_name here is important, since Documenters - # handle module prefixes slightly differently - doccls = get_documenter(self.env.app, obj, parent) - documenter = doccls(self.bridge, full_name) - - if not documenter.parse_name(): - logger.warning(__('failed to parse name %s'), real_name) - items.append((display_name, '', '')) - continue - if not documenter.import_object(): - logger.warning(__('failed to import object %s'), real_name) - items.append((display_name, '', '')) - continue - if documenter.options.members and not documenter.check_module(): - continue - - # try to also get a source code analyzer for attribute docs - try: - documenter.analyzer = ModuleAnalyzer.for_module( - documenter.get_real_modname()) - # parse right now, to get PycodeErrors on parsing (results will - # be cached anyway) - documenter.analyzer.find_attr_docs() - except PycodeError as err: - logger.debug('[autodoc] module analyzer failed: %s', err) - # no source file -- e.g. for builtin and C modules - documenter.analyzer = None - - # -- Grab the signature - - try: - sig = documenter.format_signature(show_annotation=False) - except TypeError: - # the documenter does not support ``show_annotation`` option - sig = documenter.format_signature() - - if not sig: - sig = '' - else: - max_chars = max(10, max_item_chars - len(display_name)) - sig = mangle_signature(sig, max_chars=max_chars) - - # -- Grab the summary and third_colum - - documenter.add_content(None) - summary = extract_summary(self.bridge.result.data[:], self.state.document) - if self.table_head: - third_colum = self.get_third_column_en(self.bridge.result.data[:]) - items.append((display_name, summary, third_colum)) - else: - items.append((display_name, summary)) - - - return items - - def get_table(self, items: List[Tuple[str, str, str]]) -> List[Node]: - """Generate a proper list of table nodes for autosummary:: directive. - - *items* is a list produced by :meth:`get_items`. 
- """ - table_spec = addnodes.tabular_col_spec() - table = autosummary_table('') - real_table = nodes.table('', classes=['longtable']) - table.append(real_table) - - if not self.table_head: - table_spec['spec'] = r'\X{1}{2}\X{1}{2}' - group = nodes.tgroup('', cols=2) - real_table.append(group) - group.append(nodes.colspec('', colwidth=10)) - group.append(nodes.colspec('', colwidth=90)) - else: - table_spec['spec'] = r'\X{1}{2}\X{1}{2}\X{1}{2}' - group = nodes.tgroup('', cols=3) - real_table.append(group) - group.append(nodes.colspec('', colwidth=10)) - group.append(nodes.colspec('', colwidth=60)) - group.append(nodes.colspec('', colwidth=30)) - body = nodes.tbody('') - group.append(body) - - def append_row(*column_texts: str) -> None: - row = nodes.row('') - source, line = self.state_machine.get_source_and_line() - for text in column_texts: - node = nodes.paragraph('') - vl = StringList() - vl.append(text, '%s:%d:' % (source, line)) - with switch_source_input(self.state, vl): - self.state.nested_parse(vl, 0, node) - try: - if isinstance(node[0], nodes.paragraph): - node = node[0] - except IndexError: - pass - row.append(nodes.entry('', node)) - body.append(row) - append_row(*self.table_head) - if not self.table_head: - for name, summary in items: - qualifier = 'obj' - col1 = ':%s:`%s <%s>`' % (qualifier, name, name) - col2 = summary - append_row(col1, col2) - else: - for name, summary, other in items: - qualifier = 'obj' - col1 = ':%s:`%s <%s>`' % (qualifier, name, name) - col2 = summary - col3 = other - append_row(col1, col2, col3) - return [table_spec, table] - -def get_api(fullname): - """Get the api module.""" - try: - module_name, api_name = ".".join(fullname.split('.')[:-1]), fullname.split('.')[-1] - # pylint: disable=unused-variable - module_import = importlib.import_module(module_name) - except ModuleNotFoundError: - module_name, api_name = ".".join(fullname.split('.')[:-2]), ".".join(fullname.split('.')[-2:]) - module_import = 
importlib.import_module(module_name) - # pylint: disable=eval-used - api = getattr(module_import, api_name) - return api - -class MsCnPlatformAutoSummary(MsCnAutoSummary): - """definition of cnmsplatformautosummary.""" - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.table_head = ('**接口名**', '**概述**', '**支持平台**') - self.third_name_en = "Supported Platforms:" - - def get_third_column(self, name=None, content=None): - """Get the`Supported Platforms`.""" - if not name: - return [] - try: - api_doc = inspect.getdoc(get_api(name)) - example_str = re.findall(r'Supported Platforms:\n\s+(.*?)\n\n', api_doc) - if not example_str: - example_str_leak = re.findall(r'Supported Platforms:\n\s+(.*)', api_doc) - if example_str_leak: - return example_str_leak - return ["开发中"] - return example_str - except: #pylint: disable=bare-except - return [] - -class MsCnNoteAutoSummary(MsCnAutoSummary): - """definition of cnmsnoteautosummary.""" - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.table_head = ('**接口名**', '**概述**', '**说明**') - self.third_name_en = ".. note::" - - def get_third_column(self, name=None, content=''): - note_re = re.compile(r'\.\. note::\n{,2}\s+(.*?)[。\n]') - third_str = note_re.findall(content) - return third_str diff --git a/docs/probability/docs/_ext/overwriteautosummary_generate.txt b/docs/probability/docs/_ext/overwriteautosummary_generate.txt deleted file mode 100644 index 4b0a1b1dd2b410ecab971b13da9993c90d65ef0d..0000000000000000000000000000000000000000 --- a/docs/probability/docs/_ext/overwriteautosummary_generate.txt +++ /dev/null @@ -1,707 +0,0 @@ -""" - sphinx.ext.autosummary.generate - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Usable as a library or script to generate automatic RST source files for - items referred to in autosummary:: directives. - - Each generated RST file contains a single auto*:: directive which - extracts the docstring of the referred item. 
- - Example Makefile rule:: - - generate: - sphinx-autogen -o source/generated source/*.rst - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import argparse -import importlib -import inspect -import locale -import os -import pkgutil -import pydoc -import re -import sys -import warnings -from gettext import NullTranslations -from os import path -from typing import Any, Dict, List, NamedTuple, Sequence, Set, Tuple, Type, Union - -from jinja2 import TemplateNotFound -from jinja2.sandbox import SandboxedEnvironment - -import sphinx.locale -from sphinx import __display_version__, package_dir -from sphinx.application import Sphinx -from sphinx.builders import Builder -from sphinx.config import Config -from sphinx.deprecation import RemovedInSphinx50Warning -from sphinx.ext.autodoc import Documenter -from sphinx.ext.autodoc.importer import import_module -from sphinx.ext.autosummary import (ImportExceptionGroup, get_documenter, import_by_name, - import_ivar_by_name) -from sphinx.locale import __ -from sphinx.pycode import ModuleAnalyzer, PycodeError -from sphinx.registry import SphinxComponentRegistry -from sphinx.util import logging, rst, split_full_qualified_name, get_full_modname -from sphinx.util.inspect import getall, safe_getattr -from sphinx.util.osutil import ensuredir -from sphinx.util.template import SphinxTemplateLoader - -logger = logging.getLogger(__name__) - - -class DummyApplication: - """Dummy Application class for sphinx-autogen command.""" - - def __init__(self, translator: NullTranslations) -> None: - self.config = Config() - self.registry = SphinxComponentRegistry() - self.messagelog: List[str] = [] - self.srcdir = "/" - self.translator = translator - self.verbosity = 0 - self._warncount = 0 - self.warningiserror = False - - self.config.add('autosummary_context', {}, True, None) - self.config.add('autosummary_filename_map', {}, True, None) - 
self.config.add('autosummary_ignore_module_all', True, 'env', bool) - self.config.add('docs_branch', '', True, None) - self.config.add('branch', '', True, None) - self.config.add('cst_module_name', '', True, None) - self.config.add('copy_repo', '', True, None) - self.config.add('giturl', '', True, None) - self.config.add('repo_whl', '', True, None) - self.config.init_values() - - def emit_firstresult(self, *args: Any) -> None: - pass - - -class AutosummaryEntry(NamedTuple): - name: str - path: str - template: str - recursive: bool - - -def setup_documenters(app: Any) -> None: - from sphinx.ext.autodoc import (AttributeDocumenter, ClassDocumenter, DataDocumenter, - DecoratorDocumenter, ExceptionDocumenter, - FunctionDocumenter, MethodDocumenter, ModuleDocumenter, - NewTypeAttributeDocumenter, NewTypeDataDocumenter, - PropertyDocumenter) - documenters: List[Type[Documenter]] = [ - ModuleDocumenter, ClassDocumenter, ExceptionDocumenter, DataDocumenter, - FunctionDocumenter, MethodDocumenter, NewTypeAttributeDocumenter, - NewTypeDataDocumenter, AttributeDocumenter, DecoratorDocumenter, PropertyDocumenter, - ] - for documenter in documenters: - app.registry.add_documenter(documenter.objtype, documenter) - - -def _simple_info(msg: str) -> None: - warnings.warn('_simple_info() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - print(msg) - - -def _simple_warn(msg: str) -> None: - warnings.warn('_simple_warn() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - print('WARNING: ' + msg, file=sys.stderr) - - -def _underline(title: str, line: str = '=') -> str: - if '\n' in title: - raise ValueError('Can only underline single lines') - return title + '\n' + line * len(title) - - -class AutosummaryRenderer: - """A helper class for rendering.""" - - def __init__(self, app: Union[Builder, Sphinx], template_dir: str = None) -> None: - if isinstance(app, Builder): - warnings.warn('The first argument for AutosummaryRenderer has been ' - 'changed to Sphinx 
object', - RemovedInSphinx50Warning, stacklevel=2) - if template_dir: - warnings.warn('template_dir argument for AutosummaryRenderer is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - - system_templates_path = [os.path.join(package_dir, 'ext', 'autosummary', 'templates')] - loader = SphinxTemplateLoader(app.srcdir, app.config.templates_path, - system_templates_path) - - self.env = SandboxedEnvironment(loader=loader) - self.env.filters['escape'] = rst.escape - self.env.filters['e'] = rst.escape - self.env.filters['underline'] = _underline - - if isinstance(app, (Sphinx, DummyApplication)): - if app.translator: - self.env.add_extension("jinja2.ext.i18n") - self.env.install_gettext_translations(app.translator) - elif isinstance(app, Builder): - if app.app.translator: - self.env.add_extension("jinja2.ext.i18n") - self.env.install_gettext_translations(app.app.translator) - - def exists(self, template_name: str) -> bool: - """Check if template file exists.""" - warnings.warn('AutosummaryRenderer.exists() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - try: - self.env.get_template(template_name) - return True - except TemplateNotFound: - return False - - def render(self, template_name: str, context: Dict) -> str: - """Render a template file.""" - try: - template = self.env.get_template(template_name) - except TemplateNotFound: - try: - # objtype is given as template_name - template = self.env.get_template('autosummary/%s.rst' % template_name) - except TemplateNotFound: - # fallback to base.rst - template = self.env.get_template('autosummary/base.rst') - - return template.render(context) - - -# -- Generating output --------------------------------------------------------- - - -class ModuleScanner: - def __init__(self, app: Any, obj: Any) -> None: - self.app = app - self.object = obj - - def get_object_type(self, name: str, value: Any) -> str: - return get_documenter(self.app, value, self.object).objtype - - def is_skipped(self, name: str, value: Any, 
objtype: str) -> bool: - try: - return self.app.emit_firstresult('autodoc-skip-member', objtype, - name, value, False, {}) - except Exception as exc: - logger.warning(__('autosummary: failed to determine %r to be documented, ' - 'the following exception was raised:\n%s'), - name, exc, type='autosummary') - return False - - def scan(self, imported_members: bool) -> List[str]: - members = [] - for name in members_of(self.object, self.app.config): - try: - value = safe_getattr(self.object, name) - except AttributeError: - value = None - - objtype = self.get_object_type(name, value) - if self.is_skipped(name, value, objtype): - continue - - try: - if inspect.ismodule(value): - imported = True - elif safe_getattr(value, '__module__') != self.object.__name__: - imported = True - else: - imported = False - except AttributeError: - imported = False - - respect_module_all = not self.app.config.autosummary_ignore_module_all - if imported_members: - # list all members up - members.append(name) - elif imported is False: - # list not-imported members - members.append(name) - elif '__all__' in dir(self.object) and respect_module_all: - # list members that have __all__ set - members.append(name) - - return members - - -def members_of(obj: Any, conf: Config) -> Sequence[str]: - """Get the members of ``obj``, possibly ignoring the ``__all__`` module attribute - - Follows the ``conf.autosummary_ignore_module_all`` setting.""" - - if conf.autosummary_ignore_module_all: - return dir(obj) - else: - return getall(obj) or dir(obj) - - -def generate_autosummary_content(name: str, obj: Any, parent: Any, - template: AutosummaryRenderer, template_name: str, - imported_members: bool, app: Any, - recursive: bool, context: Dict, - modname: str = None, qualname: str = None) -> str: - doc = get_documenter(app, obj, parent) - - def skip_member(obj: Any, name: str, objtype: str) -> bool: - try: - return app.emit_firstresult('autodoc-skip-member', objtype, name, - obj, False, {}) - except Exception 
as exc: - logger.warning(__('autosummary: failed to determine %r to be documented, ' - 'the following exception was raised:\n%s'), - name, exc, type='autosummary') - return False - - def get_class_members(obj: Any) -> Dict[str, Any]: - members = sphinx.ext.autodoc.get_class_members(obj, [qualname], safe_getattr) - return {name: member.object for name, member in members.items()} - - def get_module_members(obj: Any) -> Dict[str, Any]: - members = {} - for name in members_of(obj, app.config): - try: - members[name] = safe_getattr(obj, name) - except AttributeError: - continue - return members - - def get_all_members(obj: Any) -> Dict[str, Any]: - if doc.objtype == "module": - return get_module_members(obj) - elif doc.objtype == "class": - return get_class_members(obj) - return {} - - def get_members(obj: Any, types: Set[str], include_public: List[str] = [], - imported: bool = True) -> Tuple[List[str], List[str]]: - items: List[str] = [] - public: List[str] = [] - - all_members = get_all_members(obj) - for name, value in all_members.items(): - documenter = get_documenter(app, value, obj) - if documenter.objtype in types: - # skip imported members if expected - if imported or getattr(value, '__module__', None) == obj.__name__: - skipped = skip_member(value, name, documenter.objtype) - if skipped is True: - pass - elif skipped is False: - # show the member forcedly - items.append(name) - public.append(name) - else: - items.append(name) - if name in include_public or not name.startswith('_'): - # considers member as public - public.append(name) - return public, items - - def get_module_attrs(members: Any) -> Tuple[List[str], List[str]]: - """Find module attributes with docstrings.""" - attrs, public = [], [] - try: - analyzer = ModuleAnalyzer.for_module(name) - attr_docs = analyzer.find_attr_docs() - for namespace, attr_name in attr_docs: - if namespace == '' and attr_name in members: - attrs.append(attr_name) - if not attr_name.startswith('_'): - public.append(attr_name) 
- except PycodeError: - pass # give up if ModuleAnalyzer fails to parse code - return public, attrs - - def get_modules(obj: Any) -> Tuple[List[str], List[str]]: - items: List[str] = [] - for _, modname, _ispkg in pkgutil.iter_modules(obj.__path__): - fullname = name + '.' + modname - try: - module = import_module(fullname) - if module and hasattr(module, '__sphinx_mock__'): - continue - except ImportError: - pass - - items.append(fullname) - public = [x for x in items if not x.split('.')[-1].startswith('_')] - return public, items - - ns: Dict[str, Any] = {} - ns.update(context) - - if doc.objtype == 'module': - scanner = ModuleScanner(app, obj) - ns['members'] = scanner.scan(imported_members) - ns['functions'], ns['all_functions'] = \ - get_members(obj, {'function'}, imported=imported_members) - ns['classes'], ns['all_classes'] = \ - get_members(obj, {'class'}, imported=imported_members) - ns['exceptions'], ns['all_exceptions'] = \ - get_members(obj, {'exception'}, imported=imported_members) - ns['attributes'], ns['all_attributes'] = \ - get_module_attrs(ns['members']) - ispackage = hasattr(obj, '__path__') - if ispackage and recursive: - ns['modules'], ns['all_modules'] = get_modules(obj) - elif doc.objtype == 'class': - ns['members'] = dir(obj) - ns['inherited_members'] = \ - set(dir(obj)) - set(obj.__dict__.keys()) - ns['methods'], ns['all_methods'] = \ - get_members(obj, {'method'}, ['__init__']) - ns['attributes'], ns['all_attributes'] = \ - get_members(obj, {'attribute', 'property'}) - - if modname is None or qualname is None: - modname, qualname = split_full_qualified_name(name) - - if doc.objtype in ('method', 'attribute', 'property'): - ns['class'] = qualname.rsplit(".", 1)[0] - - if doc.objtype in ('class',): - shortname = qualname - else: - shortname = qualname.rsplit(".", 1)[-1] - - ns['fullname'] = name - ns['module'] = modname - ns['objname'] = qualname - ns['name'] = shortname - - ns['objtype'] = doc.objtype - ns['underline'] = len(name) * '=' - - 
if template_name: - return template.render(template_name, ns) - else: - return template.render(doc.objtype, ns) - - -def generate_autosummary_docs(sources: List[str], output_dir: str = None, - suffix: str = '.rst', base_path: str = None, - builder: Builder = None, template_dir: str = None, - imported_members: bool = False, app: Any = None, - overwrite: bool = True, encoding: str = 'utf-8') -> None: - - if builder: - warnings.warn('builder argument for generate_autosummary_docs() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - - if template_dir: - warnings.warn('template_dir argument for generate_autosummary_docs() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - - showed_sources = list(sorted(sources)) - if len(showed_sources) > 20: - showed_sources = showed_sources[:10] + ['...'] + showed_sources[-10:] - logger.info(__('[autosummary] generating autosummary for: %s') % - ', '.join(showed_sources)) - - if output_dir: - logger.info(__('[autosummary] writing to %s') % output_dir) - - if base_path is not None: - sources = [os.path.join(base_path, filename) for filename in sources] - - template = AutosummaryRenderer(app) - - # read - items = find_autosummary_in_files(sources) - - # keep track of new files - new_files = [] - - if app: - filename_map = app.config.autosummary_filename_map - else: - filename_map = {} - - # write - for entry in sorted(set(items), key=str): - if entry.path is None: - # The corresponding autosummary:: directive did not have - # a :toctree: option - continue - - path = output_dir or os.path.abspath(entry.path) - ensuredir(path) - - try: - name, obj, parent, modname = import_by_name(entry.name, grouped_exception=True) - qualname = name.replace(modname + ".", "") - except ImportExceptionGroup as exc: - try: - # try to import as an instance attribute - name, obj, parent, modname = import_ivar_by_name(entry.name) - qualname = name.replace(modname + ".", "") - except ImportError as exc2: - if exc2.__cause__: - exceptions: 
List[BaseException] = exc.exceptions + [exc2.__cause__] - else: - exceptions = exc.exceptions + [exc2] - - errors = list(set("* %s: %s" % (type(e).__name__, e) for e in exceptions)) - logger.warning(__('[autosummary] failed to import %s.\nPossible hints:\n%s'), - entry.name, '\n'.join(errors)) - continue - - context: Dict[str, Any] = {} - if app: - context.update(app.config.autosummary_context) - - content = generate_autosummary_content(name, obj, parent, template, entry.template, - imported_members, app, entry.recursive, context, - modname, qualname) - try: - py_source_rel = get_full_modname(modname, qualname).replace('.', '/') + '.py' - except: - logger.warning(name) - py_source_rel = '' - - re_view = f"\n.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/{app.config.docs_branch}/" + \ - f"resource/_static/logo_source_en.svg\n :target: " + app.config.giturl + \ - f"{app.config.copy_repo}/blob/{app.config.branch}/" + app.config.repo_whl + \ - py_source_rel.split(app.config.cst_module_name)[-1] + '\n :alt: View Source On Gitee\n\n' - - if re_view not in content and py_source_rel: - content = re.sub('([=]{5,})\n', r'\1\n' + re_view, content, 1) - filename = os.path.join(path, filename_map.get(name, name) + suffix) - if os.path.isfile(filename): - with open(filename, encoding=encoding) as f: - old_content = f.read() - - if content == old_content: - continue - elif overwrite: # content has changed - with open(filename, 'w', encoding=encoding) as f: - f.write(content) - new_files.append(filename) - else: - with open(filename, 'w', encoding=encoding) as f: - f.write(content) - new_files.append(filename) - - # descend recursively to new files - if new_files: - generate_autosummary_docs(new_files, output_dir=output_dir, - suffix=suffix, base_path=base_path, - builder=builder, template_dir=template_dir, - imported_members=imported_members, app=app, - overwrite=overwrite) - - -# -- Finding documented entries in files 
--------------------------------------- - -def find_autosummary_in_files(filenames: List[str]) -> List[AutosummaryEntry]: - """Find out what items are documented in source/*.rst. - - See `find_autosummary_in_lines`. - """ - documented: List[AutosummaryEntry] = [] - for filename in filenames: - with open(filename, encoding='utf-8', errors='ignore') as f: - lines = f.read().splitlines() - documented.extend(find_autosummary_in_lines(lines, filename=filename)) - return documented - - -def find_autosummary_in_docstring(name: str, module: str = None, filename: str = None - ) -> List[AutosummaryEntry]: - """Find out what items are documented in the given object's docstring. - - See `find_autosummary_in_lines`. - """ - if module: - warnings.warn('module argument for find_autosummary_in_docstring() is deprecated.', - RemovedInSphinx50Warning, stacklevel=2) - - try: - real_name, obj, parent, modname = import_by_name(name, grouped_exception=True) - lines = pydoc.getdoc(obj).splitlines() - return find_autosummary_in_lines(lines, module=name, filename=filename) - except AttributeError: - pass - except ImportExceptionGroup as exc: - errors = list(set("* %s: %s" % (type(e).__name__, e) for e in exc.exceptions)) - print('Failed to import %s.\nPossible hints:\n%s' % (name, '\n'.join(errors))) - except SystemExit: - print("Failed to import '%s'; the module executes module level " - "statement and it might call sys.exit()." % name) - return [] - - -def find_autosummary_in_lines(lines: List[str], module: str = None, filename: str = None - ) -> List[AutosummaryEntry]: - """Find out what items appear in autosummary:: directives in the - given lines. - - Returns a list of (name, toctree, template) where *name* is a name - of an object and *toctree* the :toctree: path of the corresponding - autosummary directive (relative to the root of the file name), and - *template* the value of the :template: option. 
*toctree* and - *template* ``None`` if the directive does not have the - corresponding options set. - """ - autosummary_re = re.compile(r'^(\s*)\.\.\s+(ms[a-z]*)?autosummary::\s*') - automodule_re = re.compile( - r'^\s*\.\.\s+automodule::\s*([A-Za-z0-9_.]+)\s*$') - module_re = re.compile( - r'^\s*\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$') - autosummary_item_re = re.compile(r'^\s+(~?[_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?') - recursive_arg_re = re.compile(r'^\s+:recursive:\s*$') - toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$') - template_arg_re = re.compile(r'^\s+:template:\s*(.*?)\s*$') - - documented: List[AutosummaryEntry] = [] - - recursive = False - toctree: str = None - template = None - current_module = module - in_autosummary = False - base_indent = "" - - for line in lines: - if in_autosummary: - m = recursive_arg_re.match(line) - if m: - recursive = True - continue - - m = toctree_arg_re.match(line) - if m: - toctree = m.group(1) - if filename: - toctree = os.path.join(os.path.dirname(filename), - toctree) - continue - - m = template_arg_re.match(line) - if m: - template = m.group(1).strip() - continue - - if line.strip().startswith(':'): - continue # skip options - - m = autosummary_item_re.match(line) - if m: - name = m.group(1).strip() - if name.startswith('~'): - name = name[1:] - if current_module and \ - not name.startswith(current_module + '.'): - name = "%s.%s" % (current_module, name) - documented.append(AutosummaryEntry(name, toctree, template, recursive)) - continue - - if not line.strip() or line.startswith(base_indent + " "): - continue - - in_autosummary = False - - m = autosummary_re.match(line) - if m: - in_autosummary = True - base_indent = m.group(1) - recursive = False - toctree = None - template = None - continue - - m = automodule_re.search(line) - if m: - current_module = m.group(1).strip() - # recurse into the automodule docstring - documented.extend(find_autosummary_in_docstring( - current_module, filename=filename)) - 
continue - - m = module_re.match(line) - if m: - current_module = m.group(2) - continue - - return documented - - -def get_parser() -> argparse.ArgumentParser: - parser = argparse.ArgumentParser( - usage='%(prog)s [OPTIONS] ...', - epilog=__('For more information, visit .'), - description=__(""" -Generate ReStructuredText using autosummary directives. - -sphinx-autogen is a frontend to sphinx.ext.autosummary.generate. It generates -the reStructuredText files from the autosummary directives contained in the -given input files. - -The format of the autosummary directive is documented in the -``sphinx.ext.autosummary`` Python module and can be read using:: - - pydoc sphinx.ext.autosummary -""")) - - parser.add_argument('--version', action='version', dest='show_version', - version='%%(prog)s %s' % __display_version__) - - parser.add_argument('source_file', nargs='+', - help=__('source files to generate rST files for')) - - parser.add_argument('-o', '--output-dir', action='store', - dest='output_dir', - help=__('directory to place all output in')) - parser.add_argument('-s', '--suffix', action='store', dest='suffix', - default='rst', - help=__('default suffix for files (default: ' - '%(default)s)')) - parser.add_argument('-t', '--templates', action='store', dest='templates', - default=None, - help=__('custom template directory (default: ' - '%(default)s)')) - parser.add_argument('-i', '--imported-members', action='store_true', - dest='imported_members', default=False, - help=__('document imported members (default: ' - '%(default)s)')) - parser.add_argument('-a', '--respect-module-all', action='store_true', - dest='respect_module_all', default=False, - help=__('document exactly the members in module __all__ attribute. 
' - '(default: %(default)s)')) - - return parser - - -def main(argv: List[str] = sys.argv[1:]) -> None: - sphinx.locale.setlocale(locale.LC_ALL, '') - sphinx.locale.init_console(os.path.join(package_dir, 'locale'), 'sphinx') - translator, _ = sphinx.locale.init([], None) - - app = DummyApplication(translator) - logging.setup(app, sys.stdout, sys.stderr) # type: ignore - setup_documenters(app) - args = get_parser().parse_args(argv) - - if args.templates: - app.config.templates_path.append(path.abspath(args.templates)) - app.config.autosummary_ignore_module_all = not args.respect_module_all # type: ignore - - generate_autosummary_docs(args.source_file, args.output_dir, - '.' + args.suffix, - imported_members=args.imported_members, - app=app) - - -if __name__ == '__main__': - main() diff --git a/docs/probability/docs/_ext/overwriteobjectiondirective.txt b/docs/probability/docs/_ext/overwriteobjectiondirective.txt deleted file mode 100644 index e7ffdfe09a737771ead4a9c2ce1d0b945bb49947..0000000000000000000000000000000000000000 --- a/docs/probability/docs/_ext/overwriteobjectiondirective.txt +++ /dev/null @@ -1,374 +0,0 @@ -""" - sphinx.directives - ~~~~~~~~~~~~~~~~~ - - Handlers for additional ReST directives. - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import re -import inspect -import importlib -from functools import reduce -from typing import TYPE_CHECKING, Any, Dict, Generic, List, Tuple, TypeVar, cast - -from docutils import nodes -from docutils.nodes import Node -from docutils.parsers.rst import directives, roles - -from sphinx import addnodes -from sphinx.addnodes import desc_signature -from sphinx.deprecation import RemovedInSphinx50Warning, deprecated_alias -from sphinx.util import docutils, logging -from sphinx.util.docfields import DocFieldTransformer, Field, TypedField -from sphinx.util.docutils import SphinxDirective -from sphinx.util.typing import OptionSpec - -if TYPE_CHECKING: - from sphinx.application import Sphinx - - -# RE to strip backslash escapes -nl_escape_re = re.compile(r'\\\n') -strip_backslash_re = re.compile(r'\\(.)') - -T = TypeVar('T') -logger = logging.getLogger(__name__) - -def optional_int(argument: str) -> int: - """ - Check for an integer argument or None value; raise ``ValueError`` if not. - """ - if argument is None: - return None - else: - value = int(argument) - if value < 0: - raise ValueError('negative value; must be positive or zero') - return value - -def get_api(fullname): - """ - Get the API object. - - :param fullname: full name of the API - :return: the attribute object, or None if it does not exist - """ - main_module = fullname.split('.')[0] - main_import = importlib.import_module(main_module) - - try: - return reduce(getattr, fullname.split('.')[1:], main_import) - except AttributeError: - return None - -def get_example(name: str): - try: - api_doc = inspect.getdoc(get_api(name)) - example_str = re.findall(r'Examples:\n([\w\W]*?)(\n\n|$)', api_doc) - if not example_str: - return [] - example_str = re.sub(r'\n\s+', r'\n', example_str[0][0]) - example_str = example_str.strip() - example_list = example_str.split('\n') - return ["", "**样例:**", ""] + example_list + [""] - except: - return [] - -def get_platforms(name: str): - try: - api_doc = inspect.getdoc(get_api(name)) - example_str = re.findall(r'Supported 
Platforms:\n\s+(.*?)\n\n', api_doc) - if not example_str: - example_str_leak = re.findall(r'Supported Platforms:\n\s+(.*)', api_doc) - if example_str_leak: - example_str = example_str_leak[0].strip() - example_list = example_str.split('\n') - example_list = [' ' + example_list[0]] - return ["", "支持平台:"] + example_list + [""] - return [] - example_str = example_str[0].strip() - example_list = example_str.split('\n') - example_list = [' ' + example_list[0]] - return ["", "支持平台:"] + example_list + [""] - except: - return [] - -class ObjectDescription(SphinxDirective, Generic[T]): - """ - Directive to describe a class, function or similar object. Not used - directly, but subclassed (in domain-specific directives) to add custom - behavior. - """ - - has_content = True - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = True - option_spec: OptionSpec = { - 'noindex': directives.flag, - } # type: Dict[str, DirectiveOption] - - # types of doc fields that this directive handles, see sphinx.util.docfields - doc_field_types: List[Field] = [] - domain: str = None - objtype: str = None - indexnode: addnodes.index = None - - # Warning: this might be removed in future version. Don't touch this from extensions. - _doc_field_type_map = {} # type: Dict[str, Tuple[Field, bool]] - - def get_field_type_map(self) -> Dict[str, Tuple[Field, bool]]: - if self._doc_field_type_map == {}: - self._doc_field_type_map = {} - for field in self.doc_field_types: - for name in field.names: - self._doc_field_type_map[name] = (field, False) - - if field.is_typed: - typed_field = cast(TypedField, field) - for name in typed_field.typenames: - self._doc_field_type_map[name] = (field, True) - - return self._doc_field_type_map - - def get_signatures(self) -> List[str]: - """ - Retrieve the signatures to document from the directive arguments. By - default, signatures are given as arguments, one per line. - - Backslash-escaping of newlines is supported. 
- """ - lines = nl_escape_re.sub('', self.arguments[0]).split('\n') - if self.config.strip_signature_backslash: - # remove backslashes to support (dummy) escapes; helps Vim highlighting - return [strip_backslash_re.sub(r'\1', line.strip()) for line in lines] - else: - return [line.strip() for line in lines] - - def handle_signature(self, sig: str, signode: desc_signature) -> Any: - """ - Parse the signature *sig* into individual nodes and append them to - *signode*. If ValueError is raised, parsing is aborted and the whole - *sig* is put into a single desc_name node. - - The return value should be a value that identifies the object. It is - passed to :meth:`add_target_and_index()` unchanged, and otherwise only - used to skip duplicates. - """ - raise ValueError - - def add_target_and_index(self, name: Any, sig: str, signode: desc_signature) -> None: - """ - Add cross-reference IDs and entries to self.indexnode, if applicable. - - *name* is whatever :meth:`handle_signature()` returned. - """ - return # do nothing by default - - def before_content(self) -> None: - """ - Called before parsing content. Used to set information about the current - directive context on the build environment. - """ - pass - - def transform_content(self, contentnode: addnodes.desc_content) -> None: - """ - Called after creating the content through nested parsing, - but before the ``object-description-transform`` event is emitted, - and before the info-fields are transformed. - Can be used to manipulate the content. - """ - pass - - def after_content(self) -> None: - """ - Called after parsing content. Used to reset information about the - current directive context on the build environment. - """ - pass - - def check_class_end(self, content): - for i in content: - if not i.startswith('.. 
include::') and i != "\n" and i != "": - return False - return True - - def extend_items(self, rst_file, start_num, num): - ls = [] - for i in range(1, num+1): - ls.append((rst_file, start_num+i)) - return ls - - def run(self) -> List[Node]: - """ - Main directive entry function, called by docutils upon encountering the - directive. - - This directive is meant to be quite easily subclassable, so it delegates - to several additional methods. What it does: - - * find out if called as a domain-specific directive, set self.domain - * create a `desc` node to fit all description inside - * parse standard options, currently `noindex` - * create an index node if needed as self.indexnode - * parse all given signatures (as returned by self.get_signatures()) - using self.handle_signature(), which should either return a name - or raise ValueError - * add index entries using self.add_target_and_index() - * parse the content and handle doc fields in it - """ - if ':' in self.name: - self.domain, self.objtype = self.name.split(':', 1) - else: - self.domain, self.objtype = '', self.name - self.indexnode = addnodes.index(entries=[]) - - node = addnodes.desc() - node.document = self.state.document - node['domain'] = self.domain - # 'desctype' is a backwards compatible attribute - node['objtype'] = node['desctype'] = self.objtype - node['noindex'] = noindex = ('noindex' in self.options) - if self.domain: - node['classes'].append(self.domain) - node['classes'].append(node['objtype']) - - self.names: List[T] = [] - signatures = self.get_signatures() - for sig in signatures: - # add a signature node for each signature in the current unit - # and add a reference target for it - signode = addnodes.desc_signature(sig, '') - self.set_source_info(signode) - node.append(signode) - try: - # name can also be a tuple, e.g. (classname, objname); - # this is strictly domain-specific (i.e. 
no assumptions may - # be made in this base class) - name = self.handle_signature(sig, signode) - except ValueError: - # signature parsing failed - signode.clear() - signode += addnodes.desc_name(sig, sig) - continue # we don't want an index entry here - if name not in self.names: - self.names.append(name) - if not noindex: - # only add target and index entry if this is the first - # description of the object with this name in this desc block - self.add_target_and_index(name, sig, signode) - - contentnode = addnodes.desc_content() - node.append(contentnode) - if self.names: - # needed for association of version{added,changed} directives - self.env.temp_data['object'] = self.names[0] - self.before_content() - try: - example = get_example(self.names[0][0]) - platforms = get_platforms(self.names[0][0]) - except Exception as e: - example = '' - platforms = '' - logger.warning(f'Error API names in {self.arguments[0]}.') - logger.warning(f'{e}') - extra = platforms + example - if extra: - if self.objtype == "method": - self.content.data.extend(extra) - else: - index_num = 0 - for num, i in enumerate(self.content.data): - if i.startswith('.. 
py:method::') or self.check_class_end(self.content.data[num:]): - index_num = num - break - if index_num: - count = len(self.content.data) - for i in extra: - self.content.data.insert(index_num-count, i) - else: - self.content.data.extend(extra) - try: - self.content.items.extend(self.extend_items(self.content.items[0][0], self.content.items[-1][1], len(extra))) - except Exception as e: - logger.warning(f'{e}') - self.state.nested_parse(self.content, self.content_offset, contentnode) - self.transform_content(contentnode) - self.env.app.emit('object-description-transform', - self.domain, self.objtype, contentnode) - DocFieldTransformer(self).transform_all(contentnode) - self.env.temp_data['object'] = None - self.after_content() - return [self.indexnode, node] - - -class DefaultRole(SphinxDirective): - """ - Set the default interpreted text role. Overridden from docutils. - """ - - optional_arguments = 1 - final_argument_whitespace = False - - def run(self) -> List[Node]: - if not self.arguments: - docutils.unregister_role('') - return [] - role_name = self.arguments[0] - role, messages = roles.role(role_name, self.state_machine.language, - self.lineno, self.state.reporter) - if role: - docutils.register_role('', role) - self.env.temp_data['default_role'] = role_name - else: - literal_block = nodes.literal_block(self.block_text, self.block_text) - reporter = self.state.reporter - error = reporter.error('Unknown interpreted text role "%s".' % role_name, - literal_block, line=self.lineno) - messages += [error] - - return cast(List[nodes.Node], messages) - - -class DefaultDomain(SphinxDirective): - """ - Directive to (re-)set the default domain for this source file. 
- """ - - has_content = False - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = False - option_spec = {} # type: Dict - - def run(self) -> List[Node]: - domain_name = self.arguments[0].lower() - # if domain_name not in env.domains: - # # try searching by label - # for domain in env.domains.values(): - # if domain.label.lower() == domain_name: - # domain_name = domain.name - # break - self.env.temp_data['default_domain'] = self.env.domains.get(domain_name) - return [] - -def setup(app: "Sphinx") -> Dict[str, Any]: - app.add_config_value("strip_signature_backslash", False, 'env') - directives.register_directive('default-role', DefaultRole) - directives.register_directive('default-domain', DefaultDomain) - directives.register_directive('describe', ObjectDescription) - # new, more consistent, name - directives.register_directive('object', ObjectDescription) - - app.add_event('object-description-transform') - - return { - 'version': 'builtin', - 'parallel_read_safe': True, - 'parallel_write_safe': True, - } - diff --git a/docs/probability/docs/_ext/overwriteviewcode.txt b/docs/probability/docs/_ext/overwriteviewcode.txt deleted file mode 100644 index 172780ec56b3ed90e7b0add617257a618cf38ee0..0000000000000000000000000000000000000000 --- a/docs/probability/docs/_ext/overwriteviewcode.txt +++ /dev/null @@ -1,378 +0,0 @@ -""" - sphinx.ext.viewcode - ~~~~~~~~~~~~~~~~~~~ - - Add links to module code in Python object descriptions. - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import posixpath -import traceback -import warnings -from os import path -from typing import Any, Dict, Generator, Iterable, Optional, Set, Tuple, cast - -from docutils import nodes -from docutils.nodes import Element, Node - -import sphinx -from sphinx import addnodes -from sphinx.application import Sphinx -from sphinx.builders import Builder -from sphinx.builders.html import StandaloneHTMLBuilder -from sphinx.deprecation import RemovedInSphinx50Warning -from sphinx.environment import BuildEnvironment -from sphinx.locale import _, __ -from sphinx.pycode import ModuleAnalyzer -from sphinx.transforms.post_transforms import SphinxPostTransform -from sphinx.util import get_full_modname, logging, status_iterator -from sphinx.util.nodes import make_refnode - - -logger = logging.getLogger(__name__) - - -OUTPUT_DIRNAME = '_modules' - - -class viewcode_anchor(Element): - """Node for viewcode anchors. - - This node will be processed in the resolving phase. - For viewcode supported builders, they will be all converted to the anchors. - For not supported builders, they will be removed. - """ - - -def _get_full_modname(app: Sphinx, modname: str, attribute: str) -> Optional[str]: - try: - return get_full_modname(modname, attribute) - except AttributeError: - # sphinx.ext.viewcode can't follow class instance attribute - # then AttributeError logging output only verbose mode. - logger.verbose('Didn\'t find %s in %s', attribute, modname) - return None - except Exception as e: - # sphinx.ext.viewcode follow python domain directives. - # because of that, if there are no real modules exists that specified - # by py:function or other directives, viewcode emits a lot of warnings. - # It should be displayed only verbose mode. 
- logger.verbose(traceback.format_exc().rstrip()) - logger.verbose('viewcode can\'t import %s, failed with error "%s"', modname, e) - return None - - -def is_supported_builder(builder: Builder) -> bool: - if builder.format != 'html': - return False - elif builder.name == 'singlehtml': - return False - elif builder.name.startswith('epub') and not builder.config.viewcode_enable_epub: - return False - else: - return True - - -def doctree_read(app: Sphinx, doctree: Node) -> None: - env = app.builder.env - if not hasattr(env, '_viewcode_modules'): - env._viewcode_modules = {} # type: ignore - - def has_tag(modname: str, fullname: str, docname: str, refname: str) -> bool: - entry = env._viewcode_modules.get(modname, None) # type: ignore - if entry is False: - return False - - code_tags = app.emit_firstresult('viewcode-find-source', modname) - if code_tags is None: - try: - analyzer = ModuleAnalyzer.for_module(modname) - analyzer.find_tags() - except Exception: - env._viewcode_modules[modname] = False # type: ignore - return False - - code = analyzer.code - tags = analyzer.tags - else: - code, tags = code_tags - - if entry is None or entry[0] != code: - entry = code, tags, {}, refname - env._viewcode_modules[modname] = entry # type: ignore - _, tags, used, _ = entry - if fullname in tags: - used[fullname] = docname - return True - - return False - - for objnode in list(doctree.findall(addnodes.desc)): - if objnode.get('domain') != 'py': - continue - names: Set[str] = set() - for signode in objnode: - if not isinstance(signode, addnodes.desc_signature): - continue - modname = signode.get('module') - fullname = signode.get('fullname') - try: - if fullname and modname==None: - if fullname.split('.')[-1].lower() == fullname.split('.')[-1] and fullname.split('.')[-2].lower() != fullname.split('.')[-2]: - modname = '.'.join(fullname.split('.')[:-2]) - fullname = '.'.join(fullname.split('.')[-2:]) - else: - modname = '.'.join(fullname.split('.')[:-1]) - fullname = 
fullname.split('.')[-1] - fullname_new = fullname - except Exception: - logger.warning(f'error_modename:{modname}') - logger.warning(f'error_fullname:{fullname}') - refname = modname - if env.config.viewcode_follow_imported_members: - new_modname = app.emit_firstresult( - 'viewcode-follow-imported', modname, fullname, - ) - if not new_modname: - new_modname = _get_full_modname(app, modname, fullname) - modname = new_modname - # logger.warning(f'new_modename:{modname}') - if not modname: - continue - # fullname = signode.get('fullname') - # if fullname and modname==None: - fullname = fullname_new - if not has_tag(modname, fullname, env.docname, refname): - continue - if fullname in names: - # only one link per name, please - continue - names.add(fullname) - pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/')) - signode += viewcode_anchor(reftarget=pagename, refid=fullname, refdoc=env.docname) - - -def env_merge_info(app: Sphinx, env: BuildEnvironment, docnames: Iterable[str], - other: BuildEnvironment) -> None: - if not hasattr(other, '_viewcode_modules'): - return - # create a _viewcode_modules dict on the main environment - if not hasattr(env, '_viewcode_modules'): - env._viewcode_modules = {} # type: ignore - # now merge in the information from the subprocess - for modname, entry in other._viewcode_modules.items(): # type: ignore - if modname not in env._viewcode_modules: # type: ignore - env._viewcode_modules[modname] = entry # type: ignore - else: - if env._viewcode_modules[modname]: # type: ignore - used = env._viewcode_modules[modname][2] # type: ignore - for fullname, docname in entry[2].items(): - if fullname not in used: - used[fullname] = docname - - -def env_purge_doc(app: Sphinx, env: BuildEnvironment, docname: str) -> None: - modules = getattr(env, '_viewcode_modules', {}) - - for modname, entry in list(modules.items()): - if entry is False: - continue - - code, tags, used, refname = entry - for fullname in list(used): - if 
used[fullname] == docname: - used.pop(fullname) - - if len(used) == 0: - modules.pop(modname) - - -class ViewcodeAnchorTransform(SphinxPostTransform): - """Convert or remove viewcode_anchor nodes depends on builder.""" - default_priority = 100 - - def run(self, **kwargs: Any) -> None: - if is_supported_builder(self.app.builder): - self.convert_viewcode_anchors() - else: - self.remove_viewcode_anchors() - - def convert_viewcode_anchors(self) -> None: - for node in self.document.findall(viewcode_anchor): - anchor = nodes.inline('', _('[源代码]'), classes=['viewcode-link']) - refnode = make_refnode(self.app.builder, node['refdoc'], node['reftarget'], - node['refid'], anchor) - node.replace_self(refnode) - - def remove_viewcode_anchors(self) -> None: - for node in list(self.document.findall(viewcode_anchor)): - node.parent.remove(node) - - -def missing_reference(app: Sphinx, env: BuildEnvironment, node: Element, contnode: Node - ) -> Optional[Node]: - # resolve our "viewcode" reference nodes -- they need special treatment - if node['reftype'] == 'viewcode': - warnings.warn('viewcode extension is no longer use pending_xref node. ' - 'Please update your extension.', RemovedInSphinx50Warning) - return make_refnode(app.builder, node['refdoc'], node['reftarget'], - node['refid'], contnode) - - return None - - -def get_module_filename(app: Sphinx, modname: str) -> Optional[str]: - """Get module filename for *modname*.""" - source_info = app.emit_firstresult('viewcode-find-source', modname) - if source_info: - return None - else: - try: - filename, source = ModuleAnalyzer.get_module_source(modname) - return filename - except Exception: - return None - - -def should_generate_module_page(app: Sphinx, modname: str) -> bool: - """Check generation of module page is needed.""" - module_filename = get_module_filename(app, modname) - if module_filename is None: - # Always (re-)generate module page when module filename is not found. 
- return True - - builder = cast(StandaloneHTMLBuilder, app.builder) - basename = modname.replace('.', '/') + builder.out_suffix - page_filename = path.join(app.outdir, '_modules/', basename) - - try: - if path.getmtime(module_filename) <= path.getmtime(page_filename): - # generation is not needed if the HTML page is newer than module file. - return False - except IOError: - pass - - return True - - -def collect_pages(app: Sphinx) -> Generator[Tuple[str, Dict[str, Any], str], None, None]: - env = app.builder.env - if not hasattr(env, '_viewcode_modules'): - return - if not is_supported_builder(app.builder): - return - highlighter = app.builder.highlighter # type: ignore - urito = app.builder.get_relative_uri - - modnames = set(env._viewcode_modules) # type: ignore - - for modname, entry in status_iterator( - sorted(env._viewcode_modules.items()), # type: ignore - __('highlighting module code... '), "blue", - len(env._viewcode_modules), # type: ignore - app.verbosity, lambda x: x[0]): - if not entry: - continue - if not should_generate_module_page(app, modname): - continue - - code, tags, used, refname = entry - # construct a page name for the highlighted source - pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/')) - # highlight the source using the builder's highlighter - if env.config.highlight_language in ('python3', 'default', 'none'): - lexer = env.config.highlight_language - else: - lexer = 'python' - highlighted = highlighter.highlight_block(code, lexer, linenos=False) - # split the code into lines - lines = highlighted.splitlines() - # split off wrap markup from the first line of the actual code - before, after = lines[0].split('<pre>') - lines[0:1] = [before + '<pre>', after] - # nothing to do for the last line; it always starts with </pre> anyway - # now that we have code lines (starting at index 1), insert anchors for - # the collected tags (HACK: this only works if the tag boundaries are - # properly nested!) - maxindex = len(lines) - 1 - for name, docname in used.items(): - type, start, end = tags[name] - backlink = urito(pagename, docname) + '#' + refname + '.' + name - lines[start] = ( - '<div class="viewcode-block" id="%s"><a class="viewcode-back" href="%s">%s</a>' % (name, backlink, _('[文档]')) + - lines[start]) - lines[min(end, maxindex)] += '</div>' - # try to find parents (for submodules) - parents = [] - parent = modname - while '.' in parent: - parent = parent.rsplit('.', 1)[0] - if parent in modnames: - parents.append({ - 'link': urito(pagename, - posixpath.join(OUTPUT_DIRNAME, parent.replace('.', '/'))), - 'title': parent}) - parents.append({'link': urito(pagename, posixpath.join(OUTPUT_DIRNAME, 'index')), - 'title': _('Module code')}) - parents.reverse() - # putting it all together - context = { - 'parents': parents, - 'title': modname, - 'body': (_('<h1>Source code for %s</h1>') % modname + - '\n'.join(lines)), - } - yield (pagename, context, 'page.html') - - if not modnames: - return - - html = ['\n'] - # the stack logic is needed for using nested lists for submodules - stack = [''] - for modname in sorted(modnames): - if modname.startswith(stack[-1]): - stack.append(modname + '.') - html.append('<ul>') - else: - stack.pop() - while not modname.startswith(stack[-1]): - stack.pop() - html.append('</ul>') - stack.append(modname + '.') - html.append('<li><a href="%s">%s</a></li>\n' % ( - urito(posixpath.join(OUTPUT_DIRNAME, 'index'), - posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))), - modname)) - html.append('</ul>' * (len(stack) - 1)) - context = { - 'title': _('Overview: module code'), - 'body': (_('<h1>All modules for which code is available</h1>') + - ''.join(html)), - } - - yield (posixpath.join(OUTPUT_DIRNAME, 'index'), context, 'page.html') - - -def setup(app: Sphinx) -> Dict[str, Any]: - app.add_config_value('viewcode_import', None, False) - app.add_config_value('viewcode_enable_epub', False, False) - app.add_config_value('viewcode_follow_imported_members', True, False) - app.connect('doctree-read', doctree_read) - app.connect('env-merge-info', env_merge_info) - app.connect('env-purge-doc', env_purge_doc) - app.connect('html-collect-pages', collect_pages) - app.connect('missing-reference', missing_reference) - # app.add_config_value('viewcode_include_modules', [], 'env') - # app.add_config_value('viewcode_exclude_modules', [], 'env') - app.add_event('viewcode-find-source') - app.add_event('viewcode-follow-imported') - app.add_post_transform(ViewcodeAnchorTransform) - return { - 'version': sphinx.__display_version__, - 'env_version': 1, - 'parallel_read_safe': True - } diff --git a/docs/probability/docs/requirements.txt b/docs/probability/docs/requirements.txt deleted file mode 100644 index 46904323e583b9e0318a9b7a0a7daa23b5e2b3e5..0000000000000000000000000000000000000000 --- a/docs/probability/docs/requirements.txt +++ /dev/null @@ -1,8 +0,0 @@ -sphinx == 4.4.0 -docutils == 0.17.1 -myst-parser == 0.18.1 -sphinx_rtd_theme == 1.0.0 -numpy -nbsphinx == 0.8.11 -IPython -jieba diff --git a/docs/probability/docs/source_en/_templates/classtemplate.rst b/docs/probability/docs/source_en/_templates/classtemplate.rst deleted file mode 100644 index 962ee2c6d63038c052b4e9d4a6e70d211f639282..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_en/_templates/classtemplate.rst +++ /dev/null @@ -1,26 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. currentmodule:: {{ module }} - -{% if objname in [] %} -{{ fullname | underline }} - -.. autofunction:: {{ fullname }} - -{% elif objname[0].istitle() %} -{{ fullname | underline }} - -.. 
autoclass:: {{ name }} - :members: - -{% else %} -{{ fullname | underline }} - -.. autofunction:: {{ fullname }} - -{% endif %} - -.. - autogenerated from _templates/classtemplate.rst - note it does not have :inherited-members: diff --git a/docs/probability/docs/source_en/_templates/classtemplate_inherited.rst b/docs/probability/docs/source_en/_templates/classtemplate_inherited.rst deleted file mode 100644 index 1cbd0ca2614acf4464bdbf11678100564d04b3c4..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_en/_templates/classtemplate_inherited.rst +++ /dev/null @@ -1,27 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. currentmodule:: {{ module }} - -{% if objname[0].istitle() %} -{{ fullname | underline }} - -.. autoclass:: {{ name }} - :inherited-members: - :members: - -{% elif fullname=="mindspore.numpy.ix\_" %} - -mindspore.numpy.ix\_ -==================== - -.. autofunction:: mindspore.numpy.ix_ - -{% else %} -{{ fullname | underline }} - -.. autofunction:: {{ fullname }} - -{% endif %} - -.. autogenerated from _templates/classtemplate_inherited.rst \ No newline at end of file diff --git a/docs/probability/docs/source_en/_templates/classtemplate_probability.rst b/docs/probability/docs/source_en/_templates/classtemplate_probability.rst deleted file mode 100644 index 6329880e1fc540de910b25d1724a2cfba8d501f2..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_en/_templates/classtemplate_probability.rst +++ /dev/null @@ -1,13 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. currentmodule:: {{ module }} - -{{ fullname | underline }} - -.. autoclass:: {{ name }} - :members: - -.. 
- autogenerated from _templates/classtemplate.rst - note it does not have :inherited-members: diff --git a/docs/probability/docs/source_en/conf.py b/docs/probability/docs/source_en/conf.py deleted file mode 100644 index f4263fb3e908d479b7568b785d15071fbfd3f1ff..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_en/conf.py +++ /dev/null @@ -1,196 +0,0 @@ -# Configuration file for the Sphinx documentation builder. -# -# This file only contains a selection of the most common options. For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import shutil -import sys -import IPython -import re -sys.path.append(os.path.abspath('../_ext')) -import sphinx.ext.autosummary.generate as g -from sphinx.ext import autodoc as sphinx_autodoc - -import mindspore - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Probability' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. 
-myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.autosummary', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'myst_parser', - 'nbsphinx', - 'sphinx.ext.mathjax', - 'IPython.sphinxext.ipython_console_highlighting' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -nbsphinx_requirejs_path = 'https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.6/require.min.js' - -nbsphinx_requirejs_options = { - "crossorigin": "anonymous", - "integrity": "sha256-1fEPhSsRKlFKGfK3eO710tEweHh1fwokU5wFGDHO+vg=" -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -suppress_warnings = [ - 'nbsphinx', -] - -pygments_style = 'sphinx' - -autodoc_inherit_docstrings = False - -autosummary_generate = True - -autosummary_generate_overwrite = False - -# -- Options for HTML output ------------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. 
-# -html_theme = 'sphinx_rtd_theme' - -import sphinx_rtd_theme -layout_target = os.path.join(os.path.dirname(sphinx_rtd_theme.__file__), 'layout.html') -layout_src = '../../../../resource/_static/layout.html' -if os.path.exists(layout_target): - os.remove(layout_target) -shutil.copy(layout_src, layout_target) - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - 'python': ('https://docs.python.org/3', '../../../../resource/python_objects.inv'), -} - -# overwriteautosummary_generate adds "view source" links for APIs and makes more autosummary classes available. -with open('../_ext/overwriteautosummary_generate.txt', 'r', encoding="utf8") as f: - exec(f.read(), g.__dict__) - -# Modify default signatures for autodoc. -autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__) -autodoc_source_re = re.compile(r'stringify_signature\(.*?\)') -get_param_func_str = r"""\ -import re -import inspect as inspect_ - -def get_param_func(func): - try: - source_code = inspect_.getsource(func) - if func.__doc__: - source_code = source_code.replace(func.__doc__, '') - all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code) - all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\"")) - return all_params - except: - return '' - -def get_obj(obj): - if isinstance(obj, type): - return obj.__init__ - - return obj -""" - -with open(autodoc_source_path, "r+", encoding="utf8") as f: - code_str = f.read() - code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0) - exec(get_param_func_str, sphinx_autodoc.__dict__) - exec(code_str, sphinx_autodoc.__dict__) - -# Get version parameters for the "view source" links. -import json - -if os.path.exists('../../../../tools/generate_html/version.json'): - with open('../../../../tools/generate_html/version.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif 
os.path.exists('../../../../tools/generate_html/daily_dev.json'): - with open('../../../../tools/generate_html/daily_dev.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily.json'): - with open('../../../../tools/generate_html/daily.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) - -if os.getenv("MS_PATH").split('/')[-1]: - copy_repo = os.getenv("MS_PATH").split('/')[-1] -else: - copy_repo = os.getenv("MS_PATH").split('/')[-2] - -branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == copy_repo][0] -docs_branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == 'tutorials'][0] -cst_module_name = 'mindspore' -repo_whl = 'mindspore/python/mindspore' -giturl = 'https://gitee.com/mindspore/' - -sys.path.append(os.path.abspath('../../../../resource/sphinx_ext')) -# import anchor_mod -import nbsphinx_mod - -sys.path.append(os.path.abspath('../../../../resource/search')) -import search_code - -sys.path.append(os.path.abspath('../../../../resource/custom_directives')) -from custom_directives import IncludeCodeDirective -from myautosummary import MsPlatformAutoSummary, MsNoteAutoSummary - -def setup(app): - app.add_directive('msplatformautosummary', MsPlatformAutoSummary) - app.add_directive('msnoteautosummary', MsNoteAutoSummary) - app.add_directive('includecode', IncludeCodeDirective) - app.add_config_value('docs_branch', '', True) - app.add_config_value('branch', '', True) - app.add_config_value('cst_module_name', '', True) - app.add_config_value('copy_repo', '', True) - app.add_config_value('giturl', '', True) - app.add_config_value('repo_whl', '', True) diff --git a/docs/probability/docs/source_en/index.rst b/docs/probability/docs/source_en/index.rst deleted file mode 100644 index f83cf9d9781126bf01ee0159cc03a0e56631ff08..0000000000000000000000000000000000000000 --- 
a/docs/probability/docs/source_en/index.rst +++ /dev/null @@ -1,41 +0,0 @@ -MindSpore Probability Documents -=================================== - -MindSpore Probability is a fusion suite seamlessly integrating Bayesian learning and deep learning, which provides a comprehensive probability learning library for building probability models and applying Bayesian inference. A deep learning model has a strong fitting capability, and the Bayesian theory has a good explainability. - -MindSpore Probability provides the following functions: - -- Abundant statistical distribution and common probabilistic inference algorithms -- Combinable probabilistic programming modules for developers to use the logic of the deep learning model to build a deep probabilistic model - -.. raw:: html - - - -Code repository address: - -Typical MindSpore Probability Application Scenarios ----------------------------------------------------- - -1. `Building the Bayesian Neural Network `_ - - Use the Bayesian neural network to classify images. - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: Installation - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: Guide - - using_bnn - probability - -.. toctree:: - :maxdepth: 1 - :caption: API References - - mindspore.nn.probability diff --git a/docs/probability/docs/source_en/mindspore.nn.probability.rst b/docs/probability/docs/source_en/mindspore.nn.probability.rst deleted file mode 100644 index de5d4d32a5aabe25fc07377dd05aafa5ffc2319a..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_en/mindspore.nn.probability.rst +++ /dev/null @@ -1,37 +0,0 @@ -mindspore.nn.probability -================================ - -.. automodule:: mindspore.nn.probability - -Bayesian Layers ---------------- - -.. 
msplatformautosummary:: - :toctree: nn_probability - :nosignatures: - :template: classtemplate_probability.rst - - mindspore.nn.probability.bnn_layers.ConvReparam - mindspore.nn.probability.bnn_layers.DenseLocalReparam - mindspore.nn.probability.bnn_layers.DenseReparam - -Prior and Posterior Distributions ---------------------------------- - -.. msplatformautosummary:: - :toctree: nn_probability - :nosignatures: - :template: classtemplate_probability.rst - - mindspore.nn.probability.bnn_layers.NormalPosterior - mindspore.nn.probability.bnn_layers.NormalPrior - -Bayesian Wrapper Functions --------------------------- - -.. msplatformautosummary:: - :toctree: nn_probability - :nosignatures: - :template: classtemplate_probability.rst - - mindspore.nn.probability.bnn_layers.WithBNNLossCell diff --git a/docs/probability/docs/source_en/probability.md b/docs/probability/docs/source_en/probability.md deleted file mode 100644 index 60ea286221330e65f604f8aa5e1000ea3132f442..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_en/probability.md +++ /dev/null @@ -1,695 +0,0 @@ -# Deep Probabilistic Programming Library - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/probability/docs/source_en/probability.md) - -MindSpore deep probabilistic programming combines Bayesian learning with deep learning, including probability distribution, probability distribution mapping, deep probability network, probability inference algorithm, Bayesian layer, Bayesian conversion, and Bayesian toolkit. For professional Bayesian learning users, it provides probability sampling, inference algorithms, and model building libraries. For users who are unfamiliar with Bayesian deep learning, it provides advanced APIs so that they can use Bayesian models without changing the deep learning programming logic.
- -## Probability Distribution - -Probability distribution (`mindspore.nn.probability.distribution`) is the basis of probabilistic programming. The `Distribution` class provides various probability statistics APIs, such as *pdf* for probability density, *cdf* for cumulative density, *kl_loss* for divergence calculation, and *sample* for sampling. Existing probability distribution examples include Gaussian distribution, Bernoulli distribution, exponential distribution, geometric distribution, and uniform distribution. - -### Probability Distribution Class - -- `Distribution`: base class of all probability distributions. - -- `Bernoulli`: Bernoulli distribution, with a parameter indicating the probability of experiment success. - -- `Exponential`: exponential distribution, with a rate parameter. - -- `Geometric`: geometric distribution, with a parameter indicating the probability of initial experiment success. - -- `Normal`: normal distribution (Gaussian distribution), with two parameters indicating the average value and standard deviation. - -- `Uniform`: uniform distribution, with two parameters indicating the minimum and maximum values on the axis. - -- `Categorical`: categorical distribution, with one parameter indicating the probability of each category. - -- `LogNormal`: lognormal distribution, with two parameters indicating the location and scale. - -- `Gumbel`: Gumbel distribution, with two parameters indicating the location and scale. - -- `Logistic`: logistic distribution, with two parameters indicating the location and scale. - -- `Cauchy`: Cauchy distribution, with two parameters indicating the location and scale. - -#### Distribution Base Class - -`Distribution` is the base class for all probability distributions. - -The `Distribution` class supports the following functions: `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, `log_survival`, `mean`, `sd`, `var`, `entropy`, `kl_loss`, `cross_entropy`, and `sample`.
The input parameters vary according to the distribution. These functions can be used only in a derived class and their parameters are determined by the function implementation of the derived class. - -- `prob`: probability density function (PDF) or probability mass function (PMF) -- `log_prob`: logarithm of the probability density or mass function -- `cdf`: cumulative distribution function (CDF) -- `log_cdf`: logarithm of the cumulative distribution function -- `survival_function`: survival function -- `log_survival`: logarithm of the survival function -- `mean`: average value -- `sd`: standard deviation -- `var`: variance -- `entropy`: entropy -- `kl_loss`: Kullback-Leibler divergence -- `cross_entropy`: cross entropy of two probability distributions -- `sample`: random sampling of probability distribution -- `get_dist_args`: returns the parameters of the distribution used in the network -- `get_dist_type`: returns the type of the distribution - -#### Bernoulli Distribution - -Bernoulli distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Bernoulli.probs`: returns the probability of success in the Bernoulli experiment as a `Tensor`. - -The `Distribution` base class invokes the private API in the `Bernoulli` to implement the public APIs in the base class. `Bernoulli` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: The input parameter *probs1* that indicates the probability of experiment success is optional. -- `entropy`: The input parameter *probs1* that indicates the probability of experiment success is optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist* and *probs1_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Bernoulli'* is supported. *probs1_b* is the experiment success probability of distribution *b*. Parameter *probs1_a* of distribution *a* is optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory.
The input parameter *probs1* that indicates the probability of experiment success is optional. -- `sample`: Optional input parameters include sample shape *shape* and experiment success probability *probs1*. -- `get_dist_args`: The input parameter *probs1* that indicates the probability of experiment success is optional. Return `(probs1,)` with type tuple. -- `get_dist_type`: returns *'Bernoulli'*. - -#### Exponential Distribution - -Exponential distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Exponential.rate`: returns the rate parameter as a `Tensor`. - -The `Distribution` base class invokes the `Exponential` private API to implement the public APIs in the base class. `Exponential` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: The input rate parameter *rate* is optional. -- `entropy`: The input rate parameter *rate* is optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist* and *rate_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Exponential'* is supported. *rate_b* is the rate parameter of distribution *b*. Parameter *rate_a* of distribution *a* is optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. The input rate parameter *rate* is optional. -- `sample`: Optional input parameters include sample shape *shape* and rate parameter *rate*. -- `get_dist_args`: The input rate parameter *rate* is optional. Return `(rate,)` with type tuple. -- `get_dist_type`: returns *'Exponential'*. - -#### Geometric Distribution - -Geometric distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Geometric.probs`: returns the probability of success in the Bernoulli experiment as a `Tensor`. - -The `Distribution` base class invokes the private API in the `Geometric` to implement the public APIs in the base class.
`Geometric` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: The input parameter *probs1* that indicates the probability of experiment success is optional. -- `entropy`: The input parameter *probs1* that indicates the probability of experiment success is optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist* and *probs1_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Geometric'* is supported. *probs1_b* is the experiment success probability of distribution *b*. Parameter *probs1_a* of distribution *a* is optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. The input parameter *probs1* that indicates the probability of experiment success is optional. -- `sample`: Optional input parameters include sample shape *shape* and experiment success probability *probs1*. -- `get_dist_args`: The input parameter *probs1* that indicates the probability of experiment success is optional. Return `(probs1,)` with type tuple. -- `get_dist_type`: returns *'Geometric'*. - -#### Normal Distribution - -Normal distribution (also known as Gaussian distribution), inherited from the `Distribution` class. - -The `Distribution` base class invokes the private API in the `Normal` to implement the public APIs in the base class. `Normal` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: Input parameters *mean* (for average value) and *sd* (for standard deviation) are optional. -- `entropy`: Input parameters *mean* (for average value) and *sd* (for standard deviation) are optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist*, *mean_b*, and *sd_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Normal'* is supported. *mean_b* and *sd_b* indicate the mean value and standard deviation of distribution *b*, respectively. 
Input parameters mean value *mean_a* and standard deviation *sd_a* of distribution *a* are optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. Input parameters mean value *mean_a* and standard deviation *sd_a* are optional. -- `sample`: Input parameters sample shape *shape*, average value *mean_a*, and standard deviation *sd_a* are optional. -- `get_dist_args`: Input parameters mean value *mean* and standard deviation *sd* are optional. Return `(mean, sd)` with type tuple. -- `get_dist_type`: returns *'Normal'*. - -#### Uniform Distribution - -Uniform distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Uniform.low`: returns the minimum value as a `Tensor`. -- `Uniform.high`: returns the maximum value as a `Tensor`. - -The `Distribution` base class invokes `Uniform` to implement public APIs in the base class. `Uniform` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: Input parameters maximum value *high* and minimum value *low* are optional. -- `entropy`: Input parameters maximum value *high* and minimum value *low* are optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist*, *high_b*, and *low_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Uniform'* is supported. *high_b* and *low_b* are parameters of distribution *b*. Input parameters maximum value *high* and minimum value *low* of distribution *a* are optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. Input parameters maximum value *high* and minimum value *low* are optional. -- `sample`: Input parameters *shape*, maximum value *high*, and minimum value *low* are optional. -- `get_dist_args`: Input parameters maximum value *high* and minimum value *low* are optional. Return `(low, high)` with type tuple. 
-- `get_dist_type`: returns *'Uniform'*. - -#### Categorical Distribution - -Categorical distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Categorical.probs`: returns the probability of each category as a `Tensor`. - -The `Distribution` base class invokes the private API in the `Categorical` to implement the public APIs in the base class. `Categorical` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: The input parameter *probs* that indicates the probability of each category is optional. -- `entropy`: The input parameter *probs* that indicates the probability of each category is optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist* and *probs_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Categorical'* is supported. *probs_b* is the categories' probabilities of distribution *b*. Parameter *probs_a* of distribution *a* is optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. The input parameter *probs* that indicates the probability of each category is optional. -- `sample`: Optional input parameters include sample shape *shape* and the categories' probabilities *probs*. -- `get_dist_args`: The input parameter *probs* that indicates the probability of each category is optional. Return `(probs,)` with type tuple. -- `get_dist_type`: returns *'Categorical'*. - -#### LogNormal Distribution - -LogNormal distribution, inherited from the `TransformedDistribution` class, constructed by `Exp` Bijector and `Normal` Distribution. - -Properties are described as follows: - -- `LogNormal.loc`: returns the location parameter as a `Tensor`. -- `LogNormal.scale`: returns the scale parameter as a `Tensor`. - -The `Distribution` base class invokes the private API in the `LogNormal` and `TransformedDistribution` to implement the public APIs in the base class. 
`LogNormal` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: Input parameters *loc* (for location) and *scale* (for scale) are optional. -- `entropy`: Input parameters *loc* (for location) and *scale* (for scale) are optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist*, *loc_b*, and *scale_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'LogNormal'* is supported. *loc_b* and *scale_b* indicate the location and scale of distribution *b*, respectively. Input parameters *loc* and *scale* of distribution *a* are optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. Input parameters location *loc* and scale *scale* are optional. -- `sample`: Input parameters sample shape *shape*, location *loc* and scale *scale* are optional. -- `get_dist_args`: Input parameters location *loc* and scale *scale* are optional. Return `(loc, scale)` with type tuple. -- `get_dist_type`: returns *'LogNormal'*. - -#### Cauchy Distribution - -Cauchy distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Cauchy.loc`: returns the location parameter as a `Tensor`. -- `Cauchy.scale`: returns the scale parameter as a `Tensor`. - -The `Distribution` base class invokes the private API in the `Cauchy` to implement the public APIs in the base class. `Cauchy` supports the following public APIs: - -- `entropy`: Input parameters *loc* (for location) and *scale* (for scale) are optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist*, *loc_b*, and *scale_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Cauchy'* is supported. *loc_b* and *scale_b* indicate the location and scale of distribution *b*, respectively. Input parameters *loc* and *scale* of distribution *a* are optional. 
-- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. Input parameters location *loc* and scale *scale* are optional. -- `sample`: Input parameters sample shape *shape*, location *loc* and scale *scale* are optional. -- `get_dist_args`: Input parameters location *loc* and scale *scale* are optional. Return `(loc, scale)` with type tuple. -- `get_dist_type`: returns *'Cauchy'*. - -#### Gumbel Distribution - -Gumbel distribution, inherited from the `TransformedDistribution` class, constructed by `GumbelCDF` Bijector and `Uniform` Distribution. - -Properties are described as follows: - -- `Gumbel.loc`: returns the location parameter as a `Tensor`. -- `Gumbel.scale`: returns the scale parameter as a `Tensor`. - -The `Distribution` base class invokes the private API in the `Gumbel` and `TransformedDistribution` to implement the public APIs in the base class. `Gumbel` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: No parameter. -- `entropy`: No parameter. -- `cross_entropy` and `kl_loss`: The input parameters *dist*, *loc_b*, and *scale_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Gumbel'* is supported. *loc_b* and *scale_b* indicate the location and scale of distribution *b*. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. -- `sample`: The input sample shape *shape* is optional. -- `get_dist_args`: Input parameters location *loc* and scale *scale* are optional. Return `(loc, scale)` with type tuple. -- `get_dist_type`: returns *'Gumbel'*. - -#### Logistic Distribution - -Logistic distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Logistic.loc`: returns the location parameter as a `Tensor`. -- `Logistic.scale`: returns the scale parameter as a `Tensor`.
- -The `Distribution` base class invokes the private API in the `Logistic` to implement the public APIs in the base class. `Logistic` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: Input parameters *loc* (for location) and *scale* (for scale) are optional. -- `entropy`: Input parameters *loc* (for location) and *scale* (for scale) are optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. Input parameters location *loc* and scale *scale* are optional. -- `sample`: Input parameters sample shape *shape*, location *loc* and scale *scale* are optional. -- `get_dist_args`: Input parameters location *loc* and scale *scale* are optional. Return `(loc, scale)` with type tuple. -- `get_dist_type`: returns *'Logistic'*. - -#### Poisson Distribution - -Poisson distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Poisson.rate`: returns the rate as a `Tensor`. - -The `Distribution` base class invokes the private API in the `Poisson` to implement the public APIs in the base class. `Poisson` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: The input parameter *rate* is optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. The input parameter *rate* is optional. -- `sample`: Optional input parameters include sample shape *shape* and the parameter *rate*. -- `get_dist_args`: The input parameter *rate* is optional. Return `(rate,)` with type tuple. -- `get_dist_type`: returns *'Poisson'*. - -#### Gamma Distribution - -Gamma distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Gamma.concentration`: returns the concentration as a `Tensor`. -- `Gamma.rate`: returns the rate as a `Tensor`.
- -The `Distribution` base class invokes the private API in the `Gamma` to implement the public APIs in the base class. `Gamma` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: The input parameters *concentration* and *rate* are optional. -- `entropy`: The input parameters *concentration* and *rate* are optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist*, *concentration_b* and *rate_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Gamma'* is supported. *concentration_b* and *rate_b* are the parameters of distribution *b*. The input parameters *concentration_a* and *rate_a* for distribution *a* are optional. -- `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. The input parameters *concentration* and *rate* are optional. -- `sample`: Optional input parameters include sample shape *shape* and parameters *concentration* and *rate*. -- `get_dist_args`: The input parameters *concentration* and *rate* are optional. Return `(concentration, rate)` with type tuple. -- `get_dist_type`: returns *'Gamma'*. - -#### Beta Distribution - -Beta distribution, inherited from the `Distribution` class. - -Properties are described as follows: - -- `Beta.concentration1`: returns the value of *concentration1* (also known as alpha) as a `Tensor`. -- `Beta.concentration0`: returns the value of *concentration0* (also known as beta) as a `Tensor`. - -The `Distribution` base class invokes the private API in the `Beta` to implement the public APIs in the base class. `Beta` supports the following public APIs: - -- `mean`, `mode`, `var`, and `sd`: The input parameters *concentration1* and *concentration0* are optional. -- `entropy`: The input parameters *concentration1* and *concentration0* are optional. -- `cross_entropy` and `kl_loss`: The input parameters *dist*, *concentration1_b* and *concentration0_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Beta'* is supported.
*concentration1_b* and *concentration0_b* are the parameters of distribution *b*. The input parameters *concentration1_a* and *concentration0_a* for distribution *a* are optional. -- `prob` and `log_prob`: The input parameter *value* is mandatory. The input parameters *concentration1* and *concentration0* are optional. -- `sample`: Optional input parameters include sample shape *shape* and parameters *concentration1* and *concentration0*. -- `get_dist_args`: The input parameters *concentration1* and *concentration0* are optional. Return `(concentration1, concentration0)` with type tuple. -- `get_dist_type`: returns *'Beta'*. - -### Probability Distribution Class Application in PyNative Mode - -`Distribution` subclasses can be used in **PyNative** mode. - -Use `Normal` as an example. Create a normal distribution whose average value is 0.0 and standard deviation is 1.0. - -```python -import mindspore as ms -import mindspore.nn.probability.distribution as msd -ms.set_context(mode=ms.PYNATIVE_MODE, device_target="GPU") - -my_normal = msd.Normal(0.0, 1.0, dtype=ms.float32) - -mean = my_normal.mean() -var = my_normal.var() -entropy = my_normal.entropy() - -value = ms.Tensor([-0.5, 0.0, 0.5], dtype=ms.float32) -prob = my_normal.prob(value) -cdf = my_normal.cdf(value) - -mean_b = ms.Tensor(1.0, dtype=ms.float32) -sd_b = ms.Tensor(2.0, dtype=ms.float32) -kl = my_normal.kl_loss('Normal', mean_b, sd_b) - -# get the distribution args as a tuple -dist_arg = my_normal.get_dist_args() - -print("mean: ", mean) -print("var: ", var) -print("entropy: ", entropy) -print("prob: ", prob) -print("cdf: ", cdf) -print("kl: ", kl) -print("dist_arg: ", dist_arg) -``` - -The output is as follows: - -```text -mean: 0.0 -var: 1.0 -entropy: 1.4189385 -prob: [0.35206532 0.3989423 0.35206532] -cdf: [0.30853754 0.5 0.69146246] -kl: 0.44314718 -dist_arg: (Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1)) -``` - -### Probability Distribution Class Application in
Graph Mode - -In graph mode, `Distribution` subclasses can be used on the network. - -```python -import mindspore.nn as nn -import mindspore as ms -import mindspore.nn.probability.distribution as msd -ms.set_context(mode=ms.GRAPH_MODE) - -class Net(nn.Cell): - def __init__(self): - super(Net, self).__init__() - self.normal = msd.Normal(0.0, 1.0, dtype=ms.float32) - - def construct(self, value, mean, sd): - pdf = self.normal.prob(value) - kl = self.normal.kl_loss("Normal", mean, sd) - return pdf, kl - -net = Net() -value = ms.Tensor([-0.5, 0.0, 0.5], dtype=ms.float32) -mean = ms.Tensor(1.0, dtype=ms.float32) -sd = ms.Tensor(1.0, dtype=ms.float32) -pdf, kl = net(value, mean, sd) -print("pdf: ", pdf) -print("kl: ", kl) -``` - -The output is as follows: - -```text -pdf: [0.35206532 0.3989423 0.35206532] -kl: 0.5 -``` - -### TransformedDistribution Class API Design - -`TransformedDistribution`, inherited from `Distribution`, is the base class for distributions obtained by applying a mapping f(x) to an existing distribution. The APIs are as follows: - -1. Properties - - - `bijector`: returns the distribution transformation method. - - `distribution`: returns the original distribution. - - `is_linear_transformation`: returns the linear transformation flag. - -2. API functions (The parameters of the following APIs are the same as those of the corresponding APIs of `distribution` in the constructor function.) - - - `cdf`: cumulative distribution function (CDF) - - `log_cdf`: logarithm of the cumulative distribution function - - `survival_function`: survival function - - `log_survival`: logarithm of the survival function - - `prob`: probability density function (PDF) or probability mass function (PMF) - - `log_prob`: logarithm of the probability density or mass function - - `sample`: random sampling - - `mean`: average value; takes no parameters and can be invoked only when the bijector satisfies `is_constant_jacobian=true`.
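Conceptually, the `cdf` of a transformed distribution composes the base distribution's `cdf` with the bijector's inverse mapping: for an increasing bijector $g$, $F_Y(y) = F_X(g^{-1}(y))$. A minimal pure-Python sketch of this idea (not the MindSpore API; `normal_cdf` and `transformed_cdf` are hypothetical helper names):

```python
import math

def normal_cdf(x, mean=0.0, sd=1.0):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def transformed_cdf(y, base_cdf, bijector_inverse):
    # F_Y(y) = F_X(g^{-1}(y)) for an increasing bijector g.
    return base_cdf(bijector_inverse(y))

# LogNormal = Exp bijector applied to Normal(0, 1): g(x) = exp(x), g^{-1}(y) = log(y)
lognormal_cdf = [transformed_cdf(y, normal_cdf, math.log) for y in (2.0, 5.0, 10.0)]
print([round(c, 7) for c in lognormal_cdf])  # approximately [0.7558914, 0.9462397, 0.9893489]
```

Similarly, `sample` just applies the forward mapping $g$ to samples drawn from the underlying distribution.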
- -### Invoking a TransformedDistribution Instance in PyNative Mode - -The `TransformedDistribution` subclass can be used in **PyNative** mode. - -Here a `TransformedDistribution` instance is constructed, using the `Normal` distribution as the distribution class to be transformed, and `Exp` as the mapping transformation, which generates the `LogNormal` distribution. - -```python -import numpy as np -import mindspore.nn as nn -import mindspore.nn.probability.bijector as msb -import mindspore.nn.probability.distribution as msd -import mindspore as ms - -ms.set_context(mode=ms.PYNATIVE_MODE) - -normal = msd.Normal(0.0, 1.0, dtype=ms.float32) -exp = msb.Exp() -LogNormal = msd.TransformedDistribution(exp, normal, seed=0, name="LogNormal") - -# compute cumulative distribution function -x = np.array([2.0, 5.0, 10.0], dtype=np.float32) -tx = ms.Tensor(x, dtype=ms.float32) -cdf = LogNormal.cdf(tx) - -# generate samples from the distribution -shape = (3, 2) -sample = LogNormal.sample(shape) - -# get information of the distribution -print(LogNormal) -# get information of the underlying distribution and the bijector separately -print("underlying distribution:\n", LogNormal.distribution) -print("bijector:\n", LogNormal.bijector) -# get the computation results -print("cdf:\n", cdf) -print("sample:\n", sample.shape) -``` - -The output is as follows: - -```text -TransformedDistribution< - (_bijector): Exp - (_distribution): Normal - > -underlying distribution: - Normal -bijector: - Exp -cdf: - [0.7558914 0.9462397 0.9893489] -sample: - (3, 2) -``` - -When a `TransformedDistribution` is constructed with a bijector for which `is_constant_jacobian = true` (for example, `ScalarAffine`), the resulting instance can use the `mean` API to calculate the average value.
For example: - -```python -normal = msd.Normal(0.0, 1.0, dtype=ms.float32) -scalaraffine = msb.ScalarAffine(1.0, 2.0) -trans_dist = msd.TransformedDistribution(scalaraffine, normal, seed=0) -mean = trans_dist.mean() -print(mean) -``` - -The output is as follows: - -```text -2.0 -``` - -### Invoking a TransformedDistribution Instance in Graph Mode - -In graph mode, the `TransformedDistribution` class can be used on the network. - -```python -import numpy as np -import mindspore.nn as nn -import mindspore as ms -import mindspore.nn.probability.bijector as msb -import mindspore.nn.probability.distribution as msd -ms.set_context(mode=ms.GRAPH_MODE) - -class Net(nn.Cell): - def __init__(self, shape, dtype=ms.float32, seed=0, name='transformed_distribution'): - super(Net, self).__init__() - # create TransformedDistribution distribution - self.exp = msb.Exp() - self.normal = msd.Normal(0.0, 1.0, dtype=dtype) - self.lognormal = msd.TransformedDistribution(self.exp, self.normal, seed=seed, name=name) - self.shape = shape - - def construct(self, value): - cdf = self.lognormal.cdf(value) - sample = self.lognormal.sample(self.shape) - return cdf, sample - -shape = (2, 3) -net = Net(shape=shape, name="LogNormal") -x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32) -tx = ms.Tensor(x, dtype=ms.float32) -cdf, sample = net(tx) -print("cdf: ", cdf) -print("sample: ", sample.shape) -``` - -The output is as follows: - -```text -cdf: [0.7558914 0.86403143 0.9171715 0.9462397 ] -sample: (2, 3) -``` - -## Probability Distribution Mapping - -Bijector (`mindspore.nn.probability.bijector`) is a basic component of probability programming. Bijector describes a random variable transformation method, and a new random variable $Y = f(x)$ may be generated by using an existing random variable X and a mapping function f. -`Bijector` provides four mapping-related transformation methods. 
It can be directly used as an operator, or used to generate a `Distribution` class instance of a new random variable on an existing `Distribution` class instance. - -### Bijector API Design - -#### Bijector Base Class - -The `Bijector` class is the base class for all probability distribution mappings. The APIs are as follows: - -1. Properties - - `name`: returns the value of `name`. - - `is_dtype`: returns the value of `dtype`. - - `parameters`: returns the value of `parameter`. - - `is_constant_jacobian`: returns the value of `is_constant_jacobian`. - - `is_injective`: returns the value of `is_injective`. - -2. Mapping functions - - `forward`: forward mapping, whose parameter is determined by `_forward` of the derived class. - - `inverse`: backward mapping, whose parameter is determined by `_inverse` of the derived class. - - `forward_log_jacobian`: logarithm of the derivative of the forward mapping, whose parameter is determined by `_forward_log_jacobian` of the derived class. - - `inverse_log_jacobian`: logarithm of the derivative of the backward mapping, whose parameter is determined by `_inverse_log_jacobian` of the derived class. - -When `Bijector` is invoked as a function, the input is a `Distribution` class instance and a `TransformedDistribution` is generated **(this cannot be invoked in graph mode)**. - -#### PowerTransform - -`PowerTransform` implements variable transformation with $Y = g(X) = {(1 + X * c)}^{1 / c}$. The APIs are as follows: - -1. Properties - - `power`: returns the value of `power` as a `Tensor`. - -2. Mapping functions - - `forward`: forward mapping, with an input parameter `Tensor`. - - `inverse`: backward mapping, with an input parameter `Tensor`. - - `forward_log_jacobian`: logarithm of the derivative of the forward mapping, with an input parameter `Tensor`. - - `inverse_log_jacobian`: logarithm of the derivative of the backward mapping, with an input parameter `Tensor`.
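As a quick sanity check, the `PowerTransform` formulas above can be evaluated by hand with plain NumPy. This is a standalone sketch assuming `power` $c = 2$; it does not require MindSpore:

```python
import numpy as np

# Standalone sketch: evaluating the PowerTransform formulas directly.
# For power c, forward(X) = (1 + c*X)**(1/c), so the inverse is
# X = (Y**c - 1)/c and d(forward)/dX = (1 + c*X)**(1/c - 1).
c = 2.0
x = np.array([2.0, 3.0, 4.0, 5.0], dtype=np.float32)

forward = (1 + c * x) ** (1 / c)
inverse = (x ** c - 1) / c
forward_log_jacobian = (1 / c - 1) * np.log(1 + c * x)
inverse_log_jacobian = (c - 1) * np.log(x)

print(forward)               # ≈ [2.236 2.646 3.    3.317]
print(inverse)               # ≈ [ 1.5  4.   7.5  12.  ]
print(forward_log_jacobian)  # ≈ [-0.805 -0.973 -1.099 -1.199]
print(inverse_log_jacobian)  # ≈ [0.693 1.099 1.386 1.609]
```

These hand-computed values agree with the outputs of the MindSpore `PowerTransform` examples later in this document.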
- -#### Exp - -`Exp` implements variable transformation with $Y = g(X) = \exp(X)$. The APIs are as follows: - -Mapping functions - -- `forward`: forward mapping, with an input parameter `Tensor`. -- `inverse`: backward mapping, with an input parameter `Tensor`. -- `forward_log_jacobian`: logarithm of the derivative of the forward mapping, with an input parameter `Tensor`. -- `inverse_log_jacobian`: logarithm of the derivative of the backward mapping, with an input parameter `Tensor`. - -#### ScalarAffine - -`ScalarAffine` implements variable transformation with $Y = g(X) = aX + b$. The APIs are as follows: - -1. Properties - - `scale`: returns the value of `scale` as a `Tensor`. - - `shift`: returns the value of `shift` as a `Tensor`. - -2. Mapping functions - - `forward`: forward mapping, with an input parameter `Tensor`. - - `inverse`: backward mapping, with an input parameter `Tensor`. - - `forward_log_jacobian`: logarithm of the derivative of the forward mapping, with an input parameter `Tensor`. - - `inverse_log_jacobian`: logarithm of the derivative of the backward mapping, with an input parameter `Tensor`. - -#### Softplus - -`Softplus` implements variable transformation with $Y = g(X) = \log(1 + e^{kX}) / k$. The APIs are as follows: - -1. Properties - - `sharpness`: returns the value of `sharpness` as a `Tensor`. - -2. Mapping functions - - `forward`: forward mapping, with an input parameter `Tensor`. - - `inverse`: backward mapping, with an input parameter `Tensor`. - - `forward_log_jacobian`: logarithm of the derivative of the forward mapping, with an input parameter `Tensor`. - - `inverse_log_jacobian`: logarithm of the derivative of the backward mapping, with an input parameter `Tensor`. - -#### GumbelCDF - -`GumbelCDF` implements variable transformation with $Y = g(X) = \exp(-\exp(-\frac{X - loc}{scale}))$. The APIs are as follows: - -1. Properties - - `loc`: returns the value of `loc` as a `Tensor`.
- - `scale`: returns the value of `scale` as a `Tensor`. - -2. Mapping functions - - `forward`: forward mapping, with an input parameter `Tensor`. - - `inverse`: backward mapping, with an input parameter `Tensor`. - - `forward_log_jacobian`: logarithm of the derivative of the forward mapping, with an input parameter `Tensor`. - - `inverse_log_jacobian`: logarithm of the derivative of the backward mapping, with an input parameter `Tensor`. - -#### Invert - -`Invert` implements the inverse of another bijector. The APIs are as follows: - -1. Properties - - `bijector`: returns the `Bijector` passed in during initialization, with type `Bijector`. - -2. Mapping functions - - `forward`: forward mapping, with an input parameter `Tensor`. - - `inverse`: backward mapping, with an input parameter `Tensor`. - - `forward_log_jacobian`: logarithm of the derivative of the forward mapping, with an input parameter `Tensor`. - - `inverse_log_jacobian`: logarithm of the derivative of the backward mapping, with an input parameter `Tensor`. - -### Invoking a Bijector Instance in PyNative Mode - -Before execution, import the required package: the `Bijector` classes live in `mindspore.nn.probability.bijector`, which is imported as `msb` for short. - -The following uses `PowerTransform` as an example. Create a `PowerTransform` object whose power is 2. - -```python -import numpy as np -import mindspore.nn.probability.bijector as msb -import mindspore as ms - -ms.set_context(mode=ms.PYNATIVE_MODE) - -powertransform = msb.PowerTransform(power=2.)
- -x = np.array([2.0, 3.0, 4.0, 5.0], dtype=np.float32) -tx = ms.Tensor(x, dtype=ms.float32) -forward = powertransform.forward(tx) -inverse = powertransform.inverse(tx) -forward_log_jaco = powertransform.forward_log_jacobian(tx) -inverse_log_jaco = powertransform.inverse_log_jacobian(tx) - -print(powertransform) -print("forward: ", forward) -print("inverse: ", inverse) -print("forward_log_jacobian: ", forward_log_jaco) -print("inverse_log_jacobian: ", inverse_log_jaco) -``` - -The output is as follows: - -```text -PowerTransform -forward: [2.236068 2.6457515 3. 3.3166249] -inverse: [ 1.5 4. 7.5 12.000001] -forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477] -inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ] -``` - -### Invoking a Bijector Instance in Graph Mode - -In graph mode, the `Bijector` subclass can be used on the network. - -```python -import numpy as np -import mindspore.nn as nn -import mindspore as ms -import mindspore.nn.probability.bijector as msb -ms.set_context(mode=ms.GRAPH_MODE) - -class Net(nn.Cell): - def __init__(self): - super(Net, self).__init__() - # create a PowerTransform bijector - self.powertransform = msb.PowerTransform(power=2.) - - def construct(self, value): - forward = self.powertransform.forward(value) - inverse = self.powertransform.inverse(value) - forward_log_jaco = self.powertransform.forward_log_jacobian(value) - inverse_log_jaco = self.powertransform.inverse_log_jacobian(value) - return forward, inverse, forward_log_jaco, inverse_log_jaco - -net = Net() -x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32) -tx = ms.Tensor(x, dtype=ms.float32) -forward, inverse, forward_log_jaco, inverse_log_jaco = net(tx) -print("forward: ", forward) -print("inverse: ", inverse) -print("forward_log_jacobian: ", forward_log_jaco) -print("inverse_log_jacobian: ", inverse_log_jaco) -``` - -The output is as follows: - -```text -forward: [2.236068 2.6457515 3. 3.3166249] -inverse: [ 1.5 4. 
7.5 12.000001] -forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477] -inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ] -``` diff --git a/docs/probability/docs/source_en/probability_en.png b/docs/probability/docs/source_en/probability_en.png deleted file mode 100644 index 562876ea6f23322bbe1975757fc0dfe18dc9efc4..0000000000000000000000000000000000000000 Binary files a/docs/probability/docs/source_en/probability_en.png and /dev/null differ diff --git a/docs/probability/docs/source_en/using_bnn.md b/docs/probability/docs/source_en/using_bnn.md deleted file mode 100644 index ed36f6a6d74484c010ab7976ef196eaa5b7c308f..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_en/using_bnn.md +++ /dev/null @@ -1,280 +0,0 @@ -# Using BNN to Implement an Image Classification Application - -[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/probability/docs/source_en/using_bnn.md) - -Deep learning models have a strong fitting capability, while Bayesian theory has good explainability. MindSpore Deep Probability Programming combines deep learning and Bayesian learning: by treating network weights as distributions and introducing latent-space distributions, distributions can be sampled during forward propagation, which introduces uncertainty and thereby improves the robustness and interpretability of the model. - -This chapter introduces in detail the application of Bayesian neural networks (BNNs) in deep probabilistic programming on MindSpore. Before starting to practice, make sure that you have correctly installed MindSpore 0.7.0-beta or above. - -> This example is for the GPU or Atlas training series platform. You can download the complete sample code from . -> BNN currently supports graph mode only; set `set_context(mode=GRAPH_MODE)` in your code.
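Before diving into the example, the core idea that a weight is not a definite value but a distribution can be illustrated with a few lines of plain NumPy. This is an illustrative sketch of the reparameterization idea, not the MindSpore implementation:

```python
import numpy as np

# Illustrative sketch only: a "Bayesian" dense layer keeps a posterior mean
# and an untransformed std for its weights, and samples a fresh weight
# matrix on every forward pass.
rng = np.random.default_rng(0)

mean = np.zeros((3, 2))                    # posterior mean of the weights
untransformed_std = np.full((3, 2), -3.0)  # mapped through softplus to stay positive

def sample_weight():
    std = np.log1p(np.exp(untransformed_std))  # softplus keeps std > 0
    eps = rng.standard_normal(mean.shape)
    return mean + std * eps                    # reparameterization trick

x = rng.standard_normal((4, 3))
out = x @ sample_weight()  # each call yields a slightly different output
print(out.shape)           # (4, 2)
```

Each forward pass draws a fresh weight sample, which is what introduces the uncertainty mentioned above.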
- -## Using BNN - -A BNN is a basic model that combines a probabilistic model with a neural network: its weights are not fixed values but distributions. The following example describes how to use the `bnn_layers` module in MDP to implement a BNN, and then use it to implement a simple image classification application. The overall process is as follows: - -1. Process the MNIST dataset. -2. Define the Bayesian LeNet. -3. Define the loss function and optimizer. -4. Load and train the dataset. - -## Environment Preparation - -Set the training mode to graph mode and the computing platform to GPU. - -```python -import mindspore as ms - -ms.set_context(mode=ms.GRAPH_MODE, save_graphs=False, device_target="GPU") -``` - -## Data Preparation - -### Downloading the Dataset - -To download the MNIST dataset and unzip it to the specified location, execute the following command: - -```python -import os -import requests - -requests.packages.urllib3.disable_warnings() -def download_dataset(dataset_url, path): - filename = dataset_url.split("/")[-1] - save_path = os.path.join(path, filename) - if os.path.exists(save_path): - return - if not os.path.exists(path): - os.makedirs(path) - res = requests.get(dataset_url, stream=True, verify=False) - with open(save_path, "wb") as f: - for chunk in res.iter_content(chunk_size=512): - if chunk: - f.write(chunk) - print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(dataset_url), path)) - -train_path = "datasets/MNIST_Data/train" -test_path = "datasets/MNIST_Data/test" - -download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte", train_path) -download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte", train_path) -download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte", test_path)
-download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte", test_path) -``` - -The directory structure of the downloaded dataset file is as follows: - -```text -./datasets/MNIST_Data -├── test -│ ├── t10k-images-idx3-ubyte -│ └── t10k-labels-idx1-ubyte -└── train - ├── train-images-idx3-ubyte - └── train-labels-idx1-ubyte -``` - -### Defining the Dataset Enhancement Method - -The original MNIST training dataset consists of 60,000 single-channel digital images of $28\times28$ pixels. The LeNet5 network with Bayesian layers used in this training expects training data tensors of shape `(32, 1, 32, 32)`, so the custom `create_dataset` function below augments the original dataset to meet the training requirements. For an explanation of the specific augmentation operations, refer to [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html). - -```python -import mindspore.dataset.vision as vision -import mindspore.dataset.transforms as transforms -from mindspore.dataset.vision import Inter -from mindspore import dataset as ds - -def create_dataset(data_path, batch_size=32, repeat_size=1, - num_parallel_workers=1): - # define dataset - mnist_ds = ds.MnistDataset(data_path) - - # define some parameters needed for data enhancement and rough adjustment - resize_height, resize_width = 32, 32 - rescale = 1.0 / 255.0 - shift = 0.0 - rescale_nml = 1 / 0.3081 - shift_nml = -1 * 0.1307 / 0.3081 - - # according to the parameters, generate the corresponding data enhancement methods - c_trans = [ - vision.Resize((resize_height, resize_width), interpolation=Inter.LINEAR), - vision.Rescale(rescale_nml, shift_nml), - vision.Rescale(rescale, shift), - vision.HWC2CHW() - ] - type_cast_op = transforms.TypeCast(ms.int32) - - # using map to apply operations to the dataset - mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers) - mnist_ds = mnist_ds.map(operations=c_trans, input_columns="image", num_parallel_workers=num_parallel_workers) - - # process the generated dataset - buffer_size = 10000 - mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) - mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True) - mnist_ds = mnist_ds.repeat(repeat_size) - - return mnist_ds -``` - -## Defining the BNN - -In the classic LeNet5 network, the data goes through the following computation: convolution 1 -> activation -> pooling -> convolution 2 -> activation -> pooling -> flatten -> fully connected 1 -> fully connected 2 -> fully connected 3. - -In this example, a probabilistic programming method is introduced: the `bnn_layers` module is used to turn the convolutional layers and the fully connected layers into Bayesian layers. - -```python -import mindspore.nn as nn -from mindspore.nn.probability import bnn_layers -import mindspore.ops as ops -import mindspore as ms - - -class BNNLeNet5(nn.Cell): - def __init__(self, num_class=10): - super(BNNLeNet5, self).__init__() - self.num_class = num_class - self.conv1 = bnn_layers.ConvReparam(1, 6, 5, stride=1, padding=0, has_bias=False, pad_mode="valid") - self.conv2 = bnn_layers.ConvReparam(6, 16, 5, stride=1, padding=0, has_bias=False, pad_mode="valid") - self.fc1 = bnn_layers.DenseReparam(16 * 5 * 5, 120) - self.fc2 = bnn_layers.DenseReparam(120, 84) - self.fc3 = bnn_layers.DenseReparam(84, self.num_class) - self.relu = nn.ReLU() - self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2) - self.flatten = nn.Flatten() - - def construct(self, x): - x = self.max_pool2d(self.relu(self.conv1(x))) - x = self.max_pool2d(self.relu(self.conv2(x))) - x = self.flatten(x) - x = self.relu(self.fc1(x)) - x = self.relu(self.fc2(x)) - x = self.fc3(x) - return x - -network = BNNLeNet5(num_class=10) -for layer in network.trainable_params(): - print(layer.name) -``` - -```text -conv1.weight_posterior.mean
-conv1.weight_posterior.untransformed_std -conv2.weight_posterior.mean -conv2.weight_posterior.untransformed_std -fc1.weight_posterior.mean -fc1.weight_posterior.untransformed_std -fc1.bias_posterior.mean -fc1.bias_posterior.untransformed_std -fc2.weight_posterior.mean -fc2.weight_posterior.untransformed_std -fc2.bias_posterior.mean -fc2.bias_posterior.untransformed_std -fc3.weight_posterior.mean -fc3.weight_posterior.untransformed_std -fc3.bias_posterior.mean -fc3.bias_posterior.untransformed_std -``` - -The printed information shows that the convolutional layers and the fully connected layers of the LeNet network constructed with the `bnn_layers` module are all Bayesian layers. - -## Defining the Loss Function and Optimizer - -Next, you need to define the loss function and the optimizer. The loss function is the training target of deep learning, also called the objective function. It can be understood as the distance between the output of the neural network (logits) and the labels, and it is a scalar. - -Common loss functions include mean square error, L2 loss, hinge loss, and cross entropy. Cross entropy is usually used for image classification. - -The optimizer is used to solve (train) the neural network. Because of the large scale of neural network parameters, deep learning uses the stochastic gradient descent (SGD) algorithm and its improved variants. MindSpore encapsulates common optimizers, such as `SGD`, `Adam`, and `Momentum`. In this example, the `AdamWeightDecay` optimizer is used. Generally, two parameters need to be set: the learning rate (`learning_rate`) and the weight decay (`weight_decay`).
- -An example of the code for defining the loss function and optimizer in MindSpore is as follows: - -```python -import mindspore.nn as nn - -# loss function definition -criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") - -# optimization definition -optimizer = nn.AdamWeightDecay(params=network.trainable_params(), learning_rate=0.0001) -``` - -## Training the Network - -The training process of a BNN is similar to that of a DNN. The only difference is that `WithLossCell` is replaced with `WithBNNLossCell`, which is applicable to BNNs. In addition to the `backbone` and `loss_fn` parameters, `WithBNNLossCell` adds the `dnn_factor` and `bnn_factor` parameters. `dnn_factor` is the coefficient of the overall network loss computed by the loss function, and `bnn_factor` is the coefficient of the KL divergence of each Bayesian layer. The two parameters balance the overall network loss against the KL divergence of the Bayesian layers, preventing the overall network loss from being overwhelmed by a large KL divergence.
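How the two factors weight the loss can be sketched as follows. This is an illustrative sketch of the weighting only, not the actual `WithBNNLossCell` source; the factor values match those used in the training code:

```python
# Illustrative sketch (not the actual WithBNNLossCell source): how the two
# factors combine the network loss with the KL terms of the Bayesian layers.
def bnn_loss(logits_loss, kl_divergences, dnn_factor=60000, bnn_factor=0.000001):
    """Balance the network loss against the summed KL divergences."""
    return dnn_factor * logits_loss + bnn_factor * sum(kl_divergences)

# e.g. a cross-entropy loss of 0.5 and the KL terms of two Bayesian layers
total = bnn_loss(0.5, [1.2e6, 0.8e6])
print(total)  # ≈ 30002.0: neither term drowns out the other
```

Without the small `bnn_factor`, the KL terms (on the order of millions) would dominate the cross-entropy loss entirely.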
- -The code examples of `train_model` and `validate_model` in MindSpore are as follows: - -```python -def train_model(train_net, net, dataset): - accs = [] - loss_sum = 0 - for _, data in enumerate(dataset.create_dict_iterator()): - train_x = ms.Tensor(data['image'].asnumpy().astype(np.float32)) - label = ms.Tensor(data['label'].asnumpy().astype(np.int32)) - loss = train_net(train_x, label) - output = net(train_x) - log_output = ops.LogSoftmax(axis=1)(output) - acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy()) - accs.append(acc) - loss_sum += loss.asnumpy() - - loss_sum = loss_sum / len(accs) - acc_mean = np.mean(accs) - return loss_sum, acc_mean - - -def validate_model(net, dataset): - accs = [] - for _, data in enumerate(dataset.create_dict_iterator()): - train_x = ms.Tensor(data['image'].asnumpy().astype(np.float32)) - label = ms.Tensor(data['label'].asnumpy().astype(np.int32)) - output = net(train_x) - log_output = ops.LogSoftmax(axis=1)(output) - acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy()) - accs.append(acc) - - acc_mean = np.mean(accs) - return acc_mean -``` - -Perform training. - -```python -from mindspore.nn import TrainOneStepCell -import mindspore as ms -import numpy as np - -net_with_loss = bnn_layers.WithBNNLossCell(network, criterion, dnn_factor=60000, bnn_factor=0.000001) -train_bnn_network = TrainOneStepCell(net_with_loss, optimizer) -train_bnn_network.set_train() - -train_set = create_dataset('./datasets/MNIST_Data/train', 64, 1) -test_set = create_dataset('./datasets/MNIST_Data/test', 64, 1) - -epoch = 10 - -for i in range(epoch): - train_loss, train_acc = train_model(train_bnn_network, network, train_set) - - valid_acc = validate_model(network, test_set) - - print('Epoch: {} \tTraining Loss: {:.4f} \tTraining Accuracy: {:.4f} \tvalidation Accuracy: {:.4f}'. 
- format(i+1, train_loss, train_acc, valid_acc)) -``` - -```text -Epoch: 1 Training Loss: 21444.8605 Training Accuracy: 0.8928 validation Accuracy: 0.9513 -Epoch: 2 Training Loss: 9396.3887 Training Accuracy: 0.9536 validation Accuracy: 0.9635 -Epoch: 3 Training Loss: 7320.2412 Training Accuracy: 0.9641 validation Accuracy: 0.9674 -Epoch: 4 Training Loss: 6221.6970 Training Accuracy: 0.9685 validation Accuracy: 0.9731 -Epoch: 5 Training Loss: 5450.9543 Training Accuracy: 0.9725 validation Accuracy: 0.9733 -Epoch: 6 Training Loss: 4898.9741 Training Accuracy: 0.9754 validation Accuracy: 0.9767 -Epoch: 7 Training Loss: 4505.7502 Training Accuracy: 0.9775 validation Accuracy: 0.9784 -Epoch: 8 Training Loss: 4099.8783 Training Accuracy: 0.9797 validation Accuracy: 0.9791 -Epoch: 9 Training Loss: 3795.2288 Training Accuracy: 0.9810 validation Accuracy: 0.9796 -Epoch: 10 Training Loss: 3581.4254 Training Accuracy: 0.9823 validation Accuracy: 0.9773 -``` diff --git a/docs/probability/docs/source_zh_cn/_templates/classtemplate.rst b/docs/probability/docs/source_zh_cn/_templates/classtemplate.rst deleted file mode 100644 index 962ee2c6d63038c052b4e9d4a6e70d211f639282..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_zh_cn/_templates/classtemplate.rst +++ /dev/null @@ -1,26 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. currentmodule:: {{ module }} - -{% if objname in [] %} -{{ fullname | underline }} - -.. autofunction:: {{ fullname }} - -{% elif objname[0].istitle() %} -{{ fullname | underline }} - -.. autoclass:: {{ name }} - :members: - -{% else %} -{{ fullname | underline }} - -.. autofunction:: {{ fullname }} - -{% endif %} - -.. 
- autogenerated from _templates/classtemplate.rst - note it does not have :inherited-members: diff --git a/docs/probability/docs/source_zh_cn/_templates/classtemplate_inherited.rst b/docs/probability/docs/source_zh_cn/_templates/classtemplate_inherited.rst deleted file mode 100644 index 1cbd0ca2614acf4464bdbf11678100564d04b3c4..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_zh_cn/_templates/classtemplate_inherited.rst +++ /dev/null @@ -1,27 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. currentmodule:: {{ module }} - -{% if objname[0].istitle() %} -{{ fullname | underline }} - -.. autoclass:: {{ name }} - :inherited-members: - :members: - -{% elif fullname=="mindspore.numpy.ix\_" %} - -mindspore.numpy.ix\_ -==================== - -.. autofunction:: mindspore.numpy.ix_ - -{% else %} -{{ fullname | underline }} - -.. autofunction:: {{ fullname }} - -{% endif %} - -.. autogenerated from _templates/classtemplate_inherited.rst \ No newline at end of file diff --git a/docs/probability/docs/source_zh_cn/_templates/classtemplate_probability.rst b/docs/probability/docs/source_zh_cn/_templates/classtemplate_probability.rst deleted file mode 100644 index 6329880e1fc540de910b25d1724a2cfba8d501f2..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_zh_cn/_templates/classtemplate_probability.rst +++ /dev/null @@ -1,13 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. currentmodule:: {{ module }} - -{{ fullname | underline }} - -.. autoclass:: {{ name }} - :members: - -.. 
- autogenerated from _templates/classtemplate.rst - note it does not have :inherited-members: diff --git a/docs/probability/docs/source_zh_cn/conf.py b/docs/probability/docs/source_zh_cn/conf.py deleted file mode 100644 index e970f113bfdbc2a4986509580d7822b2b0eb58e5..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_zh_cn/conf.py +++ /dev/null @@ -1,249 +0,0 @@ -# Configuration file for the Sphinx documentation builder. -# -# This file only contains a selection of the most common options. For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import glob -import os -import shutil -import sys -import IPython -import re -import mindspore -sys.path.append(os.path.abspath('../_ext')) -import sphinx.ext.autosummary.generate as g -from sphinx.ext import autodoc as sphinx_autodoc - - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Probability' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. 
-myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.autosummary', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'myst_parser', - 'nbsphinx', - 'sphinx.ext.mathjax', - 'IPython.sphinxext.ipython_console_highlighting' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -nbsphinx_requirejs_path = 'https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.6/require.min.js' - -nbsphinx_requirejs_options = { - "crossorigin": "anonymous", - "integrity": "sha256-1fEPhSsRKlFKGfK3eO710tEweHh1fwokU5wFGDHO+vg=" -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -suppress_warnings = [ - 'nbsphinx', -] - -pygments_style = 'sphinx' - -autodoc_inherit_docstrings = False - -autosummary_generate = True - -autosummary_generate_overwrite = False - -# -- Options for HTML output ------------------------------------------------- - -# Reconstruction of sphinx auto generated document translation. -language = 'zh_CN' -locale_dirs = ['../../../../resource/locale/'] -gettext_compact = False - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = 'sphinx_rtd_theme' - -html_search_language = 'zh' - -html_search_options = {'dict': '../../../resource/jieba.txt'} - -# Example configuration for intersphinx: refer to the Python standard library. 
-intersphinx_mapping = { - 'python': ('https://docs.python.org/3', '../../../../resource/python_objects.inv'), -} - -from sphinx import directives -with open('../_ext/overwriteobjectiondirective.txt', 'r', encoding="utf8") as f: - exec(f.read(), directives.__dict__) - -from sphinx.ext import viewcode -with open('../_ext/overwriteviewcode.txt', 'r', encoding="utf8") as f: - exec(f.read(), viewcode.__dict__) - -# Modify regex for sphinx.ext.autosummary.generate.find_autosummary_in_lines. -gfile_abs_path = os.path.abspath(g.__file__) -autosummary_re_line_old = r"autosummary_re = re.compile(r'^(\s*)\.\.\s+autosummary::\s*')" -autosummary_re_line_new = r"autosummary_re = re.compile(r'^(\s*)\.\.\s+(ms[a-z]*)?autosummary::\s*')" -with open(gfile_abs_path, "r+", encoding="utf8") as f: - data = f.read() - data = data.replace(autosummary_re_line_old, autosummary_re_line_new) - exec(data, g.__dict__) - -# Modify default signatures for autodoc. -autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__) -autodoc_source_re = re.compile(r'stringify_signature\(.*?\)') -get_param_func_str = r"""\ -import re -import inspect as inspect_ - -def get_param_func(func): - try: - source_code = inspect_.getsource(func) - if func.__doc__: - source_code = source_code.replace(func.__doc__, '') - all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code) - all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\"")) - return all_params - except: - return '' - -def get_obj(obj): - if isinstance(obj, type): - return obj.__init__ - - return obj -""" - -with open(autodoc_source_path, "r+", encoding="utf8") as f: - code_str = f.read() - code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0) - exec(get_param_func_str, sphinx_autodoc.__dict__) - exec(code_str, sphinx_autodoc.__dict__) - -sys.path.append(os.path.abspath('../../../../resource/sphinx_ext')) -# import anchor_mod 
-import nbsphinx_mod - - -sys.path.append(os.path.abspath('../../../../resource/search')) -import search_code - -# Copy source files of chinese python api from mindspore repository. -from sphinx.util import logging -logger = logging.getLogger(__name__) - -copy_path = 'docs/api/api_python/probability' -src_dir = os.path.join(os.getenv("MS_PATH"), copy_path) -des_sir = "./nn_probability" - -if not os.path.exists(src_dir): - logger.warning(f"不存在目录:{src_dir}!") -if os.path.exists(des_sir): - shutil.rmtree(des_sir) -shutil.copytree(src_dir, des_sir) - -# add view -import json - -if os.path.exists('../../../../tools/generate_html/version.json'): - with open('../../../../tools/generate_html/version.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily_dev.json'): - with open('../../../../tools/generate_html/daily_dev.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily.json'): - with open('../../../../tools/generate_html/daily.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) - -if os.getenv("MS_PATH").split('/')[-1]: - copy_repo = os.getenv("MS_PATH").split('/')[-1] -else: - copy_repo = os.getenv("MS_PATH").split('/')[-2] - -branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == copy_repo][0] -docs_branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == 'tutorials'][0] - -re_view = f"\n.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/{docs_branch}/" + \ - f"resource/_static/logo_source.svg\n :target: https://gitee.com/mindspore/{copy_repo}/blob/{branch}/" - -for cur, _, files in os.walk(des_sir): - for i in files: - if i.endswith('.rst'): - try: - with open(os.path.join(cur, i), 'r+', encoding='utf-8') as f: - content = f.read() - new_content = content - if '.. include::' in content and '.. 
automodule::' in content: - continue - if 'autosummary::' not in content and "\n=====" in content: - re_view_ = re_view + copy_path + cur.split('nn_probability')[-1] + '/' + i + \ - '\n :alt: 查看源文件\n\n' - new_content = re.sub('([=]{5,})\n', r'\1\n' + re_view_, content, 1) - if new_content != content: - f.seek(0) - f.truncate() - f.write(new_content) - except Exception: - print(f'打开{i}文件失败') - -import mindspore - -sys.path.append(os.path.abspath('../../../../resource/custom_directives')) -from custom_directives import IncludeCodeDirective -from myautosummary import MsPlatformAutoSummary, MsNoteAutoSummary, MsCnAutoSummary, MsCnPlatformAutoSummary, MsCnNoteAutoSummary - -rst_files = set([i.replace('.rst', '') for i in glob.glob('nn_probability/*.rst', recursive=True)]) - -def setup(app): - app.add_directive('msplatformautosummary', MsPlatformAutoSummary) - app.add_directive('msnoteautosummary', MsNoteAutoSummary) - app.add_directive('includecode', IncludeCodeDirective) - app.add_directive('mscnautosummary', MsCnAutoSummary) - app.add_directive('mscnplatformautosummary', MsCnPlatformAutoSummary) - app.add_directive('mscnnoteautosummary', MsCnNoteAutoSummary) - app.add_config_value('rst_files', set(), False) diff --git a/docs/probability/docs/source_zh_cn/index.rst b/docs/probability/docs/source_zh_cn/index.rst deleted file mode 100644 index 799a0e97efc479d73b23b844f56729ab8b402bd6..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_zh_cn/index.rst +++ /dev/null @@ -1,41 +0,0 @@ -MindSpore Probability文档 -========================== - -MindSpore Probability是贝叶斯学习和深度学习融合套件,“无缝”融合了深度学习模型强大的拟合能力和贝叶斯理论的可解释能力,提供完善的概率学习库,用于建立概率模型和应用贝叶斯推理。 - -MindSpore Probability 概率编程主要包括以下几部分: - -- 提供丰富的统计分布和常用的概率推断算法。 -- 提供可组合的概率编程模块,让开发者可以用开发深度学习模型的逻辑来构造深度概率模型。 - -.. raw:: html - - - -代码仓地址: - -使用概率编程的典型场景 ------------------------ - -1. `构建贝叶斯神经网络 `_ - - 利用贝叶斯神经网络实现图片分类应用。 - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: 安装部署 - -.. 
toctree:: - :glob: - :maxdepth: 1 - :caption: 使用指南 - - using_bnn - probability - -.. toctree:: - :maxdepth: 1 - :caption: API参考 - - mindspore.nn.probability diff --git a/docs/probability/docs/source_zh_cn/mindspore.nn.probability.rst b/docs/probability/docs/source_zh_cn/mindspore.nn.probability.rst deleted file mode 100644 index 6c097cd51cacb85730bd6e6186967d5b41d3e0b5..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_zh_cn/mindspore.nn.probability.rst +++ /dev/null @@ -1,43 +0,0 @@ -mindspore.nn.probability -================================ - -.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg - :target: https://gitee.com/mindspore/docs/blob/master/docs/probability/docs/source_zh_cn/mindspore.nn.probability.rst - :alt: 查看源文件 - -概率。 - -用于构建概率网络的高级组件。 - -Bayesian Layers ---------------- - -.. mscnplatformautosummary:: - :toctree: nn_probability - :nosignatures: - :template: classtemplate_probability.rst - - mindspore.nn.probability.bnn_layers.ConvReparam - mindspore.nn.probability.bnn_layers.DenseLocalReparam - mindspore.nn.probability.bnn_layers.DenseReparam - -Prior and Posterior Distributions ----------------------------------- - -.. mscnplatformautosummary:: - :toctree: nn_probability - :nosignatures: - :template: classtemplate_probability.rst - - mindspore.nn.probability.bnn_layers.NormalPosterior - mindspore.nn.probability.bnn_layers.NormalPrior - -Bayesian Wrapper Functions ---------------------------- - -.. 
mscnplatformautosummary:: - :toctree: nn_probability - :nosignatures: - :template: classtemplate_probability.rst - - mindspore.nn.probability.bnn_layers.WithBNNLossCell diff --git a/docs/probability/docs/source_zh_cn/probability.ipynb b/docs/probability/docs/source_zh_cn/probability.ipynb deleted file mode 100644 index b8baba176603c50d5cd555de13dd30f7f389b6eb..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_zh_cn/probability.ipynb +++ /dev/null @@ -1,976 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# 深度概率编程库\n", - "\n", - "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/probability/zh_cn/mindspore_probability.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/probability/zh_cn/mindspore_probability.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/probability/docs/source_zh_cn/probability.ipynb)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "MindSpore深度概率编程的目标是将深度学习和贝叶斯学习结合,包括概率分布、概率分布映射、深度概率网络、概率推断算法、贝叶斯层、贝叶斯转换和贝叶斯工具箱,面向不同的开发者。对于专业的贝叶斯学习用户,提供概率采样、推理算法和模型构建库;另一方面,为不熟悉贝叶斯深度学习的用户提供了高级的API,从而不用更改深度学习编程逻辑,即可利用贝叶斯模型。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 概率分布\n", - "\n", - "概率分布(`mindspore.nn.probability.distribution`)是概率编程的基础。`Distribution`类提供多样的概率统计接口,例如概率密度函数`pdf`、累积密度函数`cdf`、散度计算`kl_loss`、抽样`sample`等。现有的概率分布实例包括高斯分布,伯努利分布,指数型分布,几何分布和均匀分布。\n", - "\n", - "### 概率分布类\n", - "\n", - "- `Distribution`:所有概率分布的基类。\n", - "\n", - "- `Bernoulli`:伯努利分布。参数为试验成功的概率。\n", - "\n", - "- 
`Exponential`:指数型分布。参数为率参数。\n", - "\n", - "- `Geometric`:几何分布。参数为一次伯努利试验成功的概率。\n", - "\n", - "- `Normal`:正态(高斯)分布。参数为均值和标准差。\n", - "\n", - "- `Uniform`:均匀分布。参数为数轴上的最小值和最大值。\n", - "\n", - "- `Categorical`:类别分布。每种类别出现的概率。\n", - "\n", - "- `LogNormal`:对数正态分布。参数为位置参数和规模参数。\n", - "\n", - "- `Gumbel`: 耿贝尔极值分布。参数为位置参数和规模参数。\n", - "\n", - "- `Logistic`:逻辑斯谛分布。参数为位置参数和规模参数。\n", - "\n", - "- `Cauchy`:柯西分布。参数为位置参数和规模参数。\n", - "\n", - "#### Distribution基类\n", - "\n", - "`Distribution`是所有概率分布的基类。\n", - "\n", - "接口介绍:`Distribution`类支持的函数包括`prob`、`log_prob`、`cdf`、`log_cdf`、`survival_function`、`log_survival`、`mean`、`sd`、`var`、`entropy`、`kl_loss`、`cross_entropy`和`sample`。分布不同,所需传入的参数也不同。只有在派生类中才能使用,由派生类的函数实现决定参数。\n", - "\n", - "- `prob`:概率密度函数(PDF)/ 概率质量函数(PMF)。\n", - "\n", - "- `log_prob`:对数似然函数。\n", - "\n", - "- `cdf`:累积分布函数(CDF)。\n", - "\n", - "- `log_cdf`:对数累积分布函数。\n", - "\n", - "- `survival_function`:生存函数。\n", - "\n", - "- `log_survival`:对数生存函数。\n", - "\n", - "- `mean`:均值。\n", - "\n", - "- `sd`:标准差。\n", - "\n", - "- `var`:方差。\n", - "\n", - "- `entropy`:熵。\n", - "\n", - "- `kl_loss`:Kullback-Leibler 散度。\n", - "\n", - "- `cross_entropy`:两个概率分布的交叉熵。\n", - "\n", - "- `sample`:概率分布的随机抽样。\n", - "\n", - "- `get_dist_args`:概率分布在网络中使用的参数。\n", - "\n", - "- `get_dist_type`:概率分布的类型。\n", - "\n", - "#### 伯努利分布(Bernoulli)\n", - "\n", - "伯努利分布,继承自`Distribution`类。\n", - "\n", - "属性:\n", - "\n", - "- `Bernoulli.probs`:返回伯努利试验成功的概率,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`Bernoulli`中私有接口以实现基类中的公有接口。`Bernoulli`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入试验成功的概率`probs1`。\n", - "\n", - "- `entropy`:可选择传入试验成功的概率`probs1`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`和`probs1_b`。`dist`为另一分布的类型,目前只支持此处为“Bernoulli”。`probs1_b`为分布`b`的试验成功概率。可选择传入分布`a`的参数`probs1_a`。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入试验成功的概率`probs`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和试验成功的概率`probs1`。\n", - "\n", - "- 
`get_dist_args`:可选择传入试验成功的概率`probs`。返回值为`(probs,)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Bernoulli”。\n", - "\n", - "#### 指数分布(Exponential)\n", - "\n", - "指数分布,继承自`Distribution`类。\n", - "\n", - "属性:\n", - "\n", - "- `Exponential.rate`:返回分布的率参数,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`Exponential`私有接口以实现基类中的公有接口。`Exponential`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入率参数`rate`。\n", - "\n", - "- `entropy`:可选择传入率参数`rate`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`和`rate_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Exponential”。`rate_b`为分布`b`的率参数。可选择传入分布`a`的参数`rate_a`。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入率参数`rate`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和率参数`rate`。\n", - "\n", - "- `get_dist_args`:可选择传入率参数`rate`。返回值为`(rate,)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Exponential”。\n", - "\n", - "#### 几何分布(Geometric)\n", - "\n", - "几何分布,继承自`Distribution`类。\n", - "\n", - "属性:\n", - "\n", - "- `Geometric.probs`:返回伯努利试验成功的概率,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`Geometric`中私有接口以实现基类中的公有接口。`Geometric`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入试验成功的概率`probs1`。\n", - "\n", - "- `entropy`:可选择传入试验成功的概率`probs1`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`和`probs1_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Geometric”。`probs1_b`为分布`b`的试验成功概率。可选择传入分布`a`的参数`probs1_a`。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入试验成功的概率`probs1`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和试验成功的概率`probs1`。\n", - "\n", - "- `get_dist_args`:可选择传入试验成功的概率`probs1`。返回值为`(probs1,)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Geometric”。\n", - "\n", - "#### 正态分布(Normal)\n", - "\n", - "正态(高斯)分布,继承自`Distribution`类。\n", - "\n", - "`Distribution`基类调用`Normal`中私有接口以实现基类中的公有接口。`Normal`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入分布的参数均值`mean`和标准差`sd`。\n", - "\n", - "- 
`entropy`:可选择传入分布的参数均值`mean`和标准差`sd`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`,`mean_b`和`sd_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Normal”。`mean_b`和`sd_b`为分布`b`的均值和标准差。可选择传入分布`a`的参数均值`mean_a`和标准差`sd_a`。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入分布的参数均值`mean_a`和标准差`sd_a`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和分布的参数均值`mean_a`和标准差`sd_a`。\n", - "\n", - "- `get_dist_args`:可选择传入分布的参数均值`mean`和标准差`sd`。返回值为`(mean, sd)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Normal”。\n", - "\n", - "#### 均匀分布(Uniform)\n", - "\n", - "均匀分布,继承自`Distribution`类。\n", - "\n", - "属性:\n", - "\n", - "- `Uniform.low`:返回分布的最小值,类型为`Tensor`。\n", - "\n", - "- `Uniform.high`:返回分布的最大值,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`Uniform`以实现基类中的公有接口。`Uniform`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入分布的参数最大值`high`和最小值`low`。\n", - "\n", - "- `entropy`:可选择传入分布的参数最大值`high`和最小值`low`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`,`high_b`和`low_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Uniform”。`high_b`和`low_b`为分布`b`的参数。可选择传入分布`a`的参数即最大值`high_a`和最小值`low_a`。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入分布的参数最大值`high`和最小值`low`。\n", - "\n", - "- `sample`:可选择传入`shape`和分布的参数即最大值`high`和最小值`low`。\n", - "\n", - "- `get_dist_args`:可选择传入分布的参数最大值`high`和最小值`low`。返回值为`(low, high)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Uniform”。\n", - "\n", - "#### 多类别分布(Categorical)\n", - "\n", - "多类别分布,继承自`Distribution`类。\n", - "\n", - "属性:\n", - "\n", - "- `Categorical.probs`:返回各种类别的概率,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`Categorical`以实现基类中的公有接口。`Categorical`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入分布的参数类别概率`probs`。\n", - "\n", - "- `entropy`:可选择传入分布的参数类别概率`probs`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`,`probs_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Categorical”。`probs_b`为分布`b`的参数。可选择传入分布`a`的参数即`probs_a`。\n", - "\n", - "- 
`prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入分布的参数类别概率`probs`。\n", - "\n", - "- `sample`:可选择传入`shape`和类别概率`probs`。\n", - "\n", - "- `get_dist_args`:可选择传入分布的参数类别概率`probs`。返回值为`(probs,)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Categorical”。\n", - "\n", - "#### 对数正态分布(LogNormal)\n", - "\n", - "对数正态分布,继承自`TransformedDistribution`类,由`Exp`Bijector 和`Normal`Distribution 构成。\n", - "\n", - "属性:\n", - "\n", - "- `LogNormal.loc`:返回分布的位置参数,类型为`Tensor`。\n", - "\n", - "- `LogNormal.scale`:返回分布的规模参数,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`LogNormal`及`TransformedDistribution`中私有接口以实现基类中的公有接口。`LogNormal`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `entropy`:可选择传入分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`,`loc_b`和`scale_b`。`dist`为另一分布的类型的名称,目前只支持此处为“LogNormal”。`loc_b`和`scale_b`为分布`b`的均值和标准差。可选择传入分布`a`的参数均值`loc_a`和标准差`scale_a`。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入分布的参数均值`loc_a`和标准差`scale_a`。`Distribution`基类调用`TransformedDistribution`私有接口。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和分布的参数均值`loc_a`和标准差`scale_a`。`Distribution`基类调用`TransformedDistribution`私有接口。\n", - "\n", - "- `get_dist_args`:可选择传入分布的位置参数`loc`和规模参数`scale`。返回值为`(loc, scale)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“LogNormal”。\n", - "\n", - "#### 柯西分布(Cauchy)\n", - "\n", - "柯西分布,继承自`Distribution`类。\n", - "\n", - "属性:\n", - "\n", - "- `Cauchy.loc`:返回分布的位置参数,类型为`Tensor`。\n", - "\n", - "- `Cauchy.scale`:返回分布的规模参数,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`Cauchy`中私有接口以实现基类中的公有接口。`Cauchy`支持的公有接口为:\n", - "\n", - "- `entropy`:可选择传入分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`,`loc_b`和`scale_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Cauchy”。`loc_b`和`scale_b`为分布`b`的位置参数和规模参数。可选择传入分布`a`的参数位置`loc_a`和规模`scale_a`。\n", - "\n", - "- 
`prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和分布的参数包括分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `get_dist_args`:可选择传入分布的位置参数`loc`和规模参数`scale`。返回值为`(loc, scale)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Cauchy”。\n", - "\n", - "#### 耿贝尔极值分布(Gumbel)\n", - "\n", - "耿贝尔极值分布,继承自`TransformedDistribution`类,由`GumbelCDF`Bijector和`Uniform`Distribution 构成。\n", - "\n", - "属性:\n", - "\n", - "- `Gumbel.loc`:返回分布的位置参数,类型为`Tensor`。\n", - "\n", - "- `Gumbel.scale`:返回分布的规模参数,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`Gumbel`中私有接口以实现基类中的公有接口。`Gumbel`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:无参数。\n", - "\n", - "- `entropy`:无参数。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`,`loc_b`和`scale_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Gumbel”。`loc_b`和`scale_b`为分布`b`的位置参数和规模参数。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`。\n", - "\n", - "- `get_dist_args`:可选择传入分布的位置参数`loc`和规模参数`scale`。返回值为`(loc, scale)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Gumbel”。\n", - "\n", - "#### 逻辑斯谛分布(Logistic)\n", - "\n", - "逻辑斯谛分布,继承自`Distribution`类。\n", - "\n", - "属性:\n", - "\n", - "- `Logistic.loc`:返回分布的位置参数,类型为`Tensor`。\n", - "\n", - "- `Logistic.scale`:返回分布的规模参数,类型为`Tensor`。\n", - "\n", - "`Distribution`基类调用`logistic`中私有接口以实现基类中的公有接口。`Logistic`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `entropy`:可选择传入分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和分布的参数包括分布的位置参数`loc`和规模参数`scale`。\n", - "\n", - "- `get_dist_args`:可选择传入分布的位置参数`loc`和规模参数`scale`。返回值为`(loc, scale)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Logistic”。\n", - "\n", - "#### 泊松分布\n", - "\n", - "泊松分布,继承自`Distribution`类。\n", 
- "\n", - "属性:\n", - "\n", - "- `Poisson.rate`:返回分布的率参数,类型为Tensor。\n", - "\n", - "`Distribution` 基类调用`Poisson`中私有接口以实现基类中的公有接口。`Poisson`支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`var`,`sd`:可选择传入分布的率参数 rate 。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入分布的率参数`rate`。\n", - "\n", - "- `sample`:可选择传入样本形状shape 和分布的率参数 rate 。\n", - "\n", - "- `get_dist_args`:可选择传入分布的率参数`rate`。返回值为`(rate,)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Poisson”。\n", - "\n", - "#### 伽马分布(Gamma)\n", - "\n", - "伽马分布,继承自 `Distribution` 类。\n", - "\n", - "属性:\n", - "\n", - "- `Gamma.concentration`:返回分布的参数 `concentration` ,类型为`Tensor`。\n", - "\n", - "- `Gamma.rate`:返回分布的参数 `rate` ,类型为`Tensor`。\n", - "\n", - "`Distribution` 基类调用 `Gamma` 中私有接口以实现基类中的公有接口。`Gamma` 支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`sd`,`var`:可选择传入分布的参数`concentration`和参数`rate` 。\n", - "\n", - "- `entropy`:可选择传入分布的参数`concentration`和参数`rate`。\n", - "\n", - "- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入`value`。可选择传入分布的参数`concentration`和参数`rate`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`,`concentration_b`和`rate_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Gamma”。 `concentration_b`和`rate_b`为分布`b`的参数。可选择传入分布`a`的参数即`concentration_a`和`rate_a`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和分布的参数包括分布的参数`concentration`和参数`rate`。\n", - "\n", - "- `get_dist_args`:可选择传入分布的参数`concentration`和参数`rate`。返回值为`(concentration, rate)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Gamma”。\n", - "\n", - "#### 贝塔分布(Beta)\n", - "\n", - "贝塔分布,继承自 `Distribution` 类。\n", - "\n", - "属性:\n", - "\n", - "- `Beta.concentration1`:返回分布的参数 `concentration1` ,类型为`Tensor`。\n", - "\n", - "- `Beta.concentration0`:返回分布的参数 `concentration0` ,类型为`Tensor`。\n", - "\n", - "`Distribution` 基类调用 `Beta` 中私有接口以实现基类中的公有接口。`Beta` 支持的公有接口为:\n", - "\n", - "- `mean`,`mode`,`sd`,`var`:可选择传入分布的参数`concentration1`和参数`concentration0`。\n", - "\n", - "- `entropy`:可选择传入分布的参数`concentration1`和参数`concentration0`。\n", 
- "\n", - "- `prob`,`log_prob`:必须传入`value`。可选择传入分布的参数`concentration1`和参数`concentration0`。\n", - "\n", - "- `cross_entropy`,`kl_loss`:必须传入`dist`,`concentration1_b`和`concentration0_b`。`dist`为另一分布的类型的名称,目前只支持此处为“Beta”。`concentration1_b`和`concentration0_b`为分布`b`的参数。可选择传入分布`a`的参数即`concentration1_a`和`concentration0_a`。\n", - "\n", - "- `sample`:可选择传入样本形状`shape`和分布的参数`concentration1`和参数`concentration0`。\n", - "\n", - "- `get_dist_args`:可选择传入分布的参数`concentration1`和参数`concentration0`。返回值为`(concentration1, concentration0)`,类型为tuple。\n", - "\n", - "- `get_dist_type`:返回“Beta”。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 概率分布类在PyNative模式下的应用\n", - "\n", - "`Distribution`子类可在PyNative模式下使用。\n", - "\n", - "以`Normal`为例,创建一个均值为0.0、标准差为1.0的正态分布,然后计算相关函数。" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "mean: 0.0\n", - "var: 1.0\n", - "entropy: 1.4189385\n", - "prob: [0.35206532 0.3989423 0.35206532]\n", - "cdf: [0.30853754 0.5 0.69146246]\n", - "kl: 0.44314718\n", - "dist_arg: (Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1))\n" - ] - } - ], - "source": [ - "import mindspore as ms\n", - "import mindspore.nn.probability.distribution as msd\n", - "\n", - "ms.set_context(mode=ms.PYNATIVE_MODE, device_target=\"GPU\")\n", - "\n", - "my_normal = msd.Normal(0.0, 1.0, dtype=ms.float32)\n", - "\n", - "mean = my_normal.mean()\n", - "var = my_normal.var()\n", - "entropy = my_normal.entropy()\n", - "\n", - "value = ms.Tensor([-0.5, 0.0, 0.5], dtype=ms.float32)\n", - "prob = my_normal.prob(value)\n", - "cdf = my_normal.cdf(value)\n", - "\n", - "mean_b = ms.Tensor(1.0, dtype=ms.float32)\n", - "sd_b = ms.Tensor(2.0, dtype=ms.float32)\n", - "kl = my_normal.kl_loss('Normal', mean_b, sd_b)\n", - "\n", - "# get the distribution args as a tuple\n", - "dist_arg = my_normal.get_dist_args()\n", - "\n", - "print(\"mean: \", mean)\n", - 
"print(\"var: \", var)\n", - "print(\"entropy: \", entropy)\n", - "print(\"prob: \", prob)\n", - "print(\"cdf: \", cdf)\n", - "print(\"kl: \", kl)\n", - "print(\"dist_arg: \", dist_arg)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 概率分布类在图模式下的应用\n", - "\n", - "在图模式下,`Distribution`子类可用在网络中。" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "pdf: [0.35206532 0.3989423 0.35206532]\n", - "kl: 0.5\n" - ] - } - ], - "source": [ - "import mindspore.nn as nn\n", - "import mindspore as ms\n", - "import mindspore.nn.probability.distribution as msd\n", - "ms.set_context(mode=ms.GRAPH_MODE)\n", - "\n", - "class Net(nn.Cell):\n", - " def __init__(self):\n", - " super(Net, self).__init__()\n", - " self.normal = msd.Normal(0.0, 1.0, dtype=ms.float32)\n", - "\n", - " def construct(self, value, mean, sd):\n", - " pdf = self.normal.prob(value)\n", - " kl = self.normal.kl_loss(\"Normal\", mean, sd)\n", - " return pdf, kl\n", - "\n", - "net = Net()\n", - "value = ms.Tensor([-0.5, 0.0, 0.5], dtype=ms.float32)\n", - "mean = ms.Tensor(1.0, dtype=ms.float32)\n", - "sd = ms.Tensor(1.0, dtype=ms.float32)\n", - "pdf, kl = net(value, mean, sd)\n", - "print(\"pdf: \", pdf)\n", - "print(\"kl: \", kl)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### TransformedDistribution类接口设计\n", - "\n", - "`TransformedDistribution`继承自`Distribution`,是可通过映射f(x)变化得到的数学分布的基类。其接口包括:\n", - "\n", - "1. 属性\n", - "\n", - " - `bijector`:返回分布的变换方法。\n", - "\n", - " - `distribution`:返回原始分布。\n", - "\n", - " - `is_linear_transformation`:返回线性变换标志。\n", - "\n", - "2. 
接口函数(以下接口函数的参数与构造函数中`distribution`的对应接口的参数相同)。\n", - "\n", - " - `cdf`:累积分布函数(CDF)。\n", - "\n", - " - `log_cdf`:对数累积分布函数。\n", - "\n", - " - `survival_function`:生存函数。\n", - "\n", - " - `log_survival`:对数生存函数。\n", - "\n", - " - `prob`:概率密度函数(PDF)/ 概率质量函数(PMF)。\n", - "\n", - " - `log_prob`:对数似然函数。\n", - "\n", - " - `sample`:随机取样。\n", - "\n", - " - `mean`:无参数。只有当`Bijector.is_constant_jacobian=true`时可调用。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### PyNative模式下调用TransformedDistribution实例\n", - "\n", - "`TransformedDistribution`子类可在PyNative模式下使用。\n", - "\n", - "这里构造一个`TransformedDistribution`实例,使用`Normal`分布作为需要变换的分布类,使用`Exp`作为映射变换,可以生成`LogNormal`分布。" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "TransformedDistribution<\n", - " (_bijector): Exp\n", - " (_distribution): Normal\n", - " >\n", - "underlying distribution:\n", - " Normal\n", - "bijector:\n", - " Exp\n", - "cdf:\n", - " [0.7558914 0.9462397 0.9893489]\n", - "sample:\n", - " (3, 2)\n" - ] - } - ], - "source": [ - "import numpy as np\n", - "import mindspore.nn as nn\n", - "import mindspore.nn.probability.bijector as msb\n", - "import mindspore.nn.probability.distribution as msd\n", - "import mindspore as ms\n", - "\n", - "ms.set_context(mode=ms.PYNATIVE_MODE)\n", - "\n", - "normal = msd.Normal(0.0, 1.0, dtype=ms.float32)\n", - "exp = msb.Exp()\n", - "LogNormal = msd.TransformedDistribution(exp, normal, seed=0, name=\"LogNormal\")\n", - "\n", - "# compute cumulative distribution function\n", - "x = np.array([2.0, 5.0, 10.0], dtype=np.float32)\n", - "tx = ms.Tensor(x, dtype=ms.float32)\n", - "cdf = LogNormal.cdf(tx)\n", - "\n", - "# generate samples from the distribution\n", - "shape = ((3, 2))\n", - "sample = LogNormal.sample(shape)\n", - "\n", - "# get information of the distribution\n", - "print(LogNormal)\n", - "# get information of the underlying distribution and the 
bijector separately\n", - "print(\"underlying distribution:\\n\", LogNormal.distribution)\n", - "print(\"bijector:\\n\", LogNormal.bijector)\n", - "# get the computation results\n", - "print(\"cdf:\\n\", cdf)\n", - "print(\"sample:\\n\", sample.shape)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "当构造`TransformedDistribution`映射变换的`is_constant_jacobian = true`时(如`ScalarAffine`),构造的`TransformedDistribution`实例可以直接使用`mean`接口计算均值,例如:" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "2.0\n" - ] - } - ], - "source": [ - "normal = msd.Normal(0.0, 1.0, dtype=ms.float32)\n", - "scalaraffine = msb.ScalarAffine(1.0, 2.0)\n", - "trans_dist = msd.TransformedDistribution(scalaraffine, normal, seed=0)\n", - "mean = trans_dist.mean()\n", - "print(mean)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 图模式下调用TransformedDistribution实例\n", - "\n", - "在图模式下,`TransformedDistribution`类可用在网络中。" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "cdf: [0.7558914 0.86403143 0.9171715 0.9462397 ]\n", - "sample: (2, 3)\n" - ] - } - ], - "source": [ - "import numpy as np\n", - "import mindspore.nn as nn\n", - "import mindspore as ms\n", - "import mindspore.nn.probability.bijector as msb\n", - "import mindspore.nn.probability.distribution as msd\n", - "ms.set_context(mode=ms.GRAPH_MODE)\n", - "\n", - "class Net(nn.Cell):\n", - " def __init__(self, shape, dtype=ms.float32, seed=0, name='transformed_distribution'):\n", - " super(Net, self).__init__()\n", - " # create TransformedDistribution distribution\n", - " self.exp = msb.Exp()\n", - " self.normal = msd.Normal(0.0, 1.0, dtype=dtype)\n", - " self.lognormal = msd.TransformedDistribution(self.exp, self.normal, seed=seed, name=name)\n", - " self.shape = shape\n", - "\n", - 
" def construct(self, value):\n", - " cdf = self.lognormal.cdf(value)\n", - " sample = self.lognormal.sample(self.shape)\n", - " return cdf, sample\n", - "\n", - "shape = (2, 3)\n", - "net = Net(shape=shape, name=\"LogNormal\")\n", - "x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32)\n", - "tx = ms.Tensor(x, dtype=ms.float32)\n", - "cdf, sample = net(tx)\n", - "print(\"cdf: \", cdf)\n", - "print(\"sample: \", sample.shape)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 概率分布映射\n", - "\n", - "Bijector(`mindspore.nn.probability.bijector`)是概率编程的基本组成部分。Bijector描述了一种随机变量的变换方法,可以通过一个已有的随机变量X和一个映射函数f生成一个新的随机变量$Y = f(x)$。\n", - "\n", - "`Bijector`提供了映射相关的四种变换方法。它可以当做算子直接使用,也可以作用在某个随机变量`Distribution`类实例上生成新的随机变量的`Distribution`类实例。\n", - "\n", - "### Bijector类接口设计\n", - "\n", - "#### Bijector基类\n", - "\n", - "`Bijector`类是所有概率分布映射的基类。其接口包括:\n", - "\n", - "1. 属性\n", - "\n", - " - `name`:返回`name`的值。\n", - "\n", - " - `is_dtype`:返回`dtype`的值。\n", - "\n", - " - `parameter`:返回`parameter`的值。\n", - "\n", - " - `is_constant_jacobian`:返回`is_constant_jacobian`的值。\n", - "\n", - " - `is_injective`:返回`is_injective`的值。\n", - "\n", - "2. 映射函数\n", - "\n", - " - `forward`:正向映射,创建派生类后由派生类的`_forward`决定参数。\n", - "\n", - " - `inverse`:反向映射,创建派生类后由派生类的`_inverse`决定参数。\n", - "\n", - " - `forward_log_jacobian`:正向映射的导数的对数,创建派生类后由派生类的`_forward_log_jacobian`决定参数。\n", - "\n", - " - `inverse_log_jacobian`:反向映射的导数的对数,创建派生类后由派生类的`_inverse_log_jacobian`决定参数。\n", - "\n", - "`Bijector`作为函数调用:输入是一个`Distribution`类:生成一个`TransformedDistribution` *(不可在图内调用)*。\n", - "\n", - "#### 幂函数变换映射(PowerTransform)\n", - "\n", - "`PowerTransform`做如下变量替换:$Y = g(X) = {(1 + X \\times power)}^{1 / power}$。其接口包括:\n", - "\n", - "1. 属性\n", - "\n", - " - `power`:返回`power`的值,类型为`Tensor`。\n", - "\n", - "2. 
映射函数\n", - "\n", - " - `forward`:正向映射,输入为`Tensor`。\n", - "\n", - " - `inverse`:反向映射,输入为`Tensor`。\n", - "\n", - " - `forward_log_jacobian`:正向映射的导数的对数,输入为`Tensor`。\n", - "\n", - " - `inverse_log_jacobian`:反向映射的导数的对数,输入为`Tensor`。\n", - "\n", - "#### 指数变换映射(Exp)\n", - "\n", - "`Exp`做如下变量替换:$Y = g(X)= exp(X)$。其接口包括:\n", - "\n", - "映射函数\n", - "\n", - "- `forward`:正向映射,输入为`Tensor`。\n", - "\n", - "- `inverse`:反向映射,输入为`Tensor`。\n", - "\n", - "- `forward_log_jacobian`:正向映射的导数的对数,输入为`Tensor`。\n", - "\n", - "- `inverse_log_jacobian`:反向映射的导数的对数,输入为`Tensor`。\n", - "\n", - "#### 标量仿射变换映射(ScalarAffine)\n", - "\n", - "`ScalarAffine`做如下变量替换:$Y = g(X) = scale\\times X + shift$。其接口包括:\n", - "\n", - "1. 属性\n", - "\n", - " - `scale`:返回`scale`的值,类型为`Tensor`。\n", - "\n", - " - `shift`:返回`shift`的值,类型为`Tensor`。\n", - "\n", - "2. 映射函数\n", - "\n", - " - `forward`:正向映射,输入为`Tensor`。\n", - "\n", - " - `inverse`:反向映射,输入为`Tensor`。\n", - "\n", - " - `forward_log_jacobian`:正向映射的导数的对数,输入为`Tensor`。\n", - "\n", - " - `inverse_log_jacobian`:反向映射的导数的对数,输入为`Tensor`。\n", - "\n", - "#### Softplus变换映射(Softplus)\n", - "\n", - "`Softplus`做如下变量替换:$Y = g(X) = \\frac{log(1 + e ^ {sharpness \\times X}\\ \\ \\ \\ \\ \\ )} {sharpness}$。其接口包括:\n", - "\n", - "1. 属性\n", - "\n", - " - `sharpness`:返回`sharpness`的值,类型为`Tensor`。\n", - "\n", - "2. 映射函数\n", - "\n", - " - `forward`:正向映射,输入为`Tensor`。\n", - "\n", - " - `inverse`:反向映射,输入为`Tensor`。\n", - "\n", - " - `forward_log_jacobian`:正向映射的导数的对数,输入为`Tensor`。\n", - "\n", - " - `inverse_log_jacobian`:反向映射的导数的对数,输入为`Tensor`。\n", - "\n", - "#### 耿贝尔累计密度函数映射(GumbelCDF)\n", - "\n", - "`GumbelCDF`做如下变量替换:$Y = g(X) = \\exp(-\\exp(-\\frac{X - loc}{scale}))$。其接口包括:\n", - "\n", - "1. 属性\n", - "\n", - " - `loc`:返回`loc`的值,类型为`Tensor`。\n", - "\n", - " - `scale`:返回`scale`的值,类型为`Tensor`。\n", - "\n", - "2. 
映射函数\n", - "\n", - " - `forward`:正向映射,输入为`Tensor`。\n", - "\n", - " - `inverse`:反向映射,输入为`Tensor`。\n", - "\n", - " - `forward_log_jacobian`:正向映射的导数的对数,输入为`Tensor`。\n", - "\n", - " - `inverse_log_jacobian`:反向映射的导数的对数,输入为`Tensor`。\n", - "\n", - "#### 逆映射(Invert)\n", - "\n", - "`Invert`对一个映射做逆变换,其接口包括:\n", - "\n", - "1. 属性\n", - "\n", - " - `bijector`:返回初始化时使用的`Bijector`,类型为`Bijector`。\n", - "\n", - "2. 映射函数\n", - "\n", - " - `forward`:正向映射,输入为`Tensor`。\n", - "\n", - " - `inverse`:反向映射,输入为`Tensor`。\n", - "\n", - " - `forward_log_jacobian`:正向映射的导数的对数,输入为`Tensor`。\n", - "\n", - " - `inverse_log_jacobian`:反向映射的导数的对数,输入为`Tensor`。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### PyNative模式下调用Bijector实例\n", - "\n", - "在执行之前,我们需要导入需要的库文件包。双射类最主要的库是`mindspore.nn.probability.bijector`,导入后我们使用`msb`作为库的缩写并进行调用。\n", - "\n", - "下面我们以`PowerTransform`为例。创建一个指数为2的`PowerTransform`对象。" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "PowerTransform\n", - "forward: [2.236068 2.6457515 3. 3.3166249]\n", - "inverse: [ 1.5 4. 
7.5 12.000001]\n", - "forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477]\n", - "inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ]\n" - ] - } - ], - "source": [ - "import numpy as np\n", - "import mindspore.nn as nn\n", - "import mindspore.nn.probability.bijector as msb\n", - "import mindspore as ms\n", - "\n", - "ms.set_context(mode=ms.PYNATIVE_MODE)\n", - "\n", - "powertransform = msb.PowerTransform(power=2.)\n", - "\n", - "x = np.array([2.0, 3.0, 4.0, 5.0], dtype=np.float32)\n", - "tx = ms.Tensor(x, dtype=ms.float32)\n", - "forward = powertransform.forward(tx)\n", - "inverse = powertransform.inverse(tx)\n", - "forward_log_jaco = powertransform.forward_log_jacobian(tx)\n", - "inverse_log_jaco = powertransform.inverse_log_jacobian(tx)\n", - "\n", - "print(powertransform)\n", - "print(\"forward: \", forward)\n", - "print(\"inverse: \", inverse)\n", - "print(\"forward_log_jacobian: \", forward_log_jaco)\n", - "print(\"inverse_log_jacobian: \", inverse_log_jaco)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 图模式下调用Bijector实例\n", - "\n", - "在图模式下,`Bijector`子类可用在网络中。" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "forward: [2.236068 2.6457515 3. 3.3166249]\n", - "inverse: [ 1.5 4. 
7.5 12.000001]\n", - "forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477]\n", - "inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ]\n" - ] - } - ], - "source": [ - "import numpy as np\n", - "import mindspore.nn as nn\n", - "import mindspore as ms\n", - "import mindspore.nn.probability.bijector as msb\n", - "ms.set_context(mode=ms.GRAPH_MODE)\n", - "\n", - "class Net(nn.Cell):\n", - " def __init__(self):\n", - " super(Net, self).__init__()\n", - " # create a PowerTransform bijector\n", - " self.powertransform = msb.PowerTransform(power=2.)\n", - "\n", - " def construct(self, value):\n", - " forward = self.powertransform.forward(value)\n", - " inverse = self.powertransform.inverse(value)\n", - " forward_log_jaco = self.powertransform.forward_log_jacobian(value)\n", - " inverse_log_jaco = self.powertransform.inverse_log_jacobian(value)\n", - " return forward, inverse, forward_log_jaco, inverse_log_jaco\n", - "\n", - "net = Net()\n", - "x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32)\n", - "tx = ms.Tensor(x, dtype=ms.float32)\n", - "forward, inverse, forward_log_jaco, inverse_log_jaco = net(tx)\n", - "print(\"forward: \", forward)\n", - "print(\"inverse: \", inverse)\n", - "print(\"forward_log_jacobian: \", forward_log_jaco)\n", - "print(\"inverse_log_jacobian: \", inverse_log_jaco)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "MindSpore", - "language": "python", - "name": "mindspore" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.6" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} diff --git a/docs/probability/docs/source_zh_cn/probability_cn.png b/docs/probability/docs/source_zh_cn/probability_cn.png deleted file mode 100644 index 
826c0fea308a8ebf91efdd45f74ebae02ea734a4..0000000000000000000000000000000000000000 Binary files a/docs/probability/docs/source_zh_cn/probability_cn.png and /dev/null differ diff --git a/docs/probability/docs/source_zh_cn/using_bnn.ipynb b/docs/probability/docs/source_zh_cn/using_bnn.ipynb deleted file mode 100644 index bedf26d98b6df8b74ce788cc570b3331ff5511a9..0000000000000000000000000000000000000000 --- a/docs/probability/docs/source_zh_cn/using_bnn.ipynb +++ /dev/null @@ -1,408 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# 使用贝叶斯神经网络实现图片分类应用\n", - "\n", - "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/probability/zh_cn/mindspore_using_bnn.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/probability/zh_cn/mindspore_using_bnn.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/probability/docs/source_zh_cn/using_bnn.ipynb)\n", - "\n", - "深度学习模型具有强大的拟合能力,而贝叶斯理论具有很好的可解释能力。MindSpore深度概率编程(MindSpore Probability)将深度学习和贝叶斯学习结合,通过设置网络权重为分布、引入隐空间分布等,可以对分布进行采样前向传播,由此引入了不确定性,从而增强了模型的鲁棒性和可解释性。\n", - "\n", - "本章将详细介绍深度概率编程中的贝叶斯神经网络在MindSpore上的应用。在动手进行实践之前,确保你已经正确安装了MindSpore 0.7.0-beta及其以上版本。\n", - "\n", - "> 本例面向GPU或Atlas训练系列产品平台,你可以在这里下载完整的样例代码:。\n", - ">\n", - "> 贝叶斯神经网络目前只支持图模式,需要在代码中设置`set_context(mode=GRAPH_MODE)`。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 使用贝叶斯神经网络\n", - "\n", - "贝叶斯神经网络是由概率模型和神经网络组成的基本模型,它的权重不再是一个确定的值,而是一个分布。本例介绍了如何使用MDP中的`bnn_layers`模块实现贝叶斯神经网络,并利用贝叶斯神经网络实现一个简单的图片分类功能,整体流程如下:\n", - "\n", - "1. 处理MNIST数据集;\n", - "\n", - "2. 
定义贝叶斯LeNet网络;\n", - "\n", - "3. 定义损失函数和优化器;\n", - "\n", - "4. 加载数据集并进行训练。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 环境准备\n", - "\n", - "设置训练模式为图模式,计算平台为GPU。" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "import mindspore as ms\n", - "\n", - "ms.set_context(mode=ms.GRAPH_MODE, save_graphs=False, device_target=\"GPU\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 数据准备\n", - "\n", - "### 下载数据集\n", - "\n", - "以下示例代码将MNIST数据集下载并解压到指定位置。" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "import requests\n", - "\n", - "requests.packages.urllib3.disable_warnings()\n", - "\n", - "def download_dataset(dataset_url, path):\n", - " filename = dataset_url.split(\"/\")[-1]\n", - " save_path = os.path.join(path, filename)\n", - " if os.path.exists(save_path):\n", - " return\n", - " if not os.path.exists(path):\n", - " os.makedirs(path)\n", - " res = requests.get(dataset_url, stream=True, verify=False)\n", - " with open(save_path, \"wb\") as f:\n", - " for chunk in res.iter_content(chunk_size=512):\n", - " if chunk:\n", - " f.write(chunk)\n", - " print(\"The {} file is downloaded and saved in the path {} after processing\".format(os.path.basename(dataset_url), path))\n", - "\n", - "train_path = \"datasets/MNIST_Data/train\"\n", - "test_path = \"datasets/MNIST_Data/test\"\n", - "\n", - "download_dataset(\"https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte\", train_path)\n", - "download_dataset(\"https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte\", train_path)\n", - "download_dataset(\"https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte\", test_path)\n", - 
"download_dataset(\"https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte\", test_path)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "下载的数据集文件的目录结构如下:\n", - "\n", - "```text\n", - "./datasets/MNIST_Data\n", - "├── test\n", - "│ ├── t10k-images-idx3-ubyte\n", - "│ └── t10k-labels-idx1-ubyte\n", - "└── train\n", - " ├── train-images-idx3-ubyte\n", - " └── train-labels-idx1-ubyte\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 定义数据集增强方法\n", - "\n", - "MNIST数据集的原始训练数据集是60000张$28\\times28$像素的单通道数字图片,本次训练用到的含贝叶斯层的LeNet5网络接收到训练数据的张量为`(32,1,32,32)`,通过自定义create_dataset函数将原始数据集增强为适应训练要求的数据,具体的增强操作解释可参考[初学入门](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/quick_start.html)。" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [], - "source": [ - "import mindspore.dataset.vision as vision\n", - "import mindspore.dataset.transforms as transforms\n", - "from mindspore.dataset.vision import Inter\n", - "from mindspore import dataset as ds\n", - "\n", - "def create_dataset(data_path, batch_size=32, repeat_size=1,\n", - " num_parallel_workers=1):\n", - " # define dataset\n", - " mnist_ds = ds.MnistDataset(data_path)\n", - "\n", - " # define some parameters needed for data enhancement and rough justification\n", - " resize_height, resize_width = 32, 32\n", - " rescale = 1.0 / 255.0\n", - " shift = 0.0\n", - " rescale_nml = 1 / 0.3081\n", - " shift_nml = -1 * 0.1307 / 0.3081\n", - "\n", - " # according to the parameters, generate the corresponding data enhancement method\n", - " c_trans = [\n", - " vision.Resize((resize_height, resize_width), interpolation=Inter.LINEAR),\n", - " vision.Rescale(rescale_nml, shift_nml),\n", - " vision.Rescale(rescale, shift),\n", - " vision.HWC2CHW()\n", - " ]\n", - " type_cast_op = transforms.TypeCast(ms.int32)\n", - "\n", - " # using map to apply operations to a dataset\n", - " mnist_ds = 
mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n", - " mnist_ds = mnist_ds.map(operations=c_trans, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n", - "\n", - " # process the generated dataset\n", - " buffer_size = 10000\n", - " mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)\n", - " mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)\n", - " mnist_ds = mnist_ds.repeat(repeat_size)\n", - "\n", - " return mnist_ds" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 定义贝叶斯神经网络\n", - "\n", - "在经典LeNet5网络中,数据经过如下计算过程:卷积1->激活->池化->卷积2->激活->池化->降维->全连接1->全连接2->全连接3。 \n", - "本例中将引入概率编程方法,利用`bnn_layers`模块将卷层和全连接层改造成贝叶斯层" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "conv1.weight_posterior.mean\n", - "conv1.weight_posterior.untransformed_std\n", - "conv2.weight_posterior.mean\n", - "conv2.weight_posterior.untransformed_std\n", - "fc1.weight_posterior.mean\n", - "fc1.weight_posterior.untransformed_std\n", - "fc1.bias_posterior.mean\n", - "fc1.bias_posterior.untransformed_std\n", - "fc2.weight_posterior.mean\n", - "fc2.weight_posterior.untransformed_std\n", - "fc2.bias_posterior.mean\n", - "fc2.bias_posterior.untransformed_std\n", - "fc3.weight_posterior.mean\n", - "fc3.weight_posterior.untransformed_std\n", - "fc3.bias_posterior.mean\n", - "fc3.bias_posterior.untransformed_std\n" - ] - } - ], - "source": [ - "import mindspore.nn as nn\n", - "from mindspore.nn.probability import bnn_layers\n", - "import mindspore.ops as ops\n", - "import mindspore as ms\n", - "\n", - "\n", - "class BNNLeNet5(nn.Cell):\n", - " def __init__(self, num_class=10):\n", - " super(BNNLeNet5, self).__init__()\n", - " self.num_class = num_class\n", - " self.conv1 = bnn_layers.ConvReparam(1, 6, 5, stride=1, padding=0, has_bias=False, pad_mode=\"valid\")\n", - " self.conv2 = 
bnn_layers.ConvReparam(6, 16, 5, stride=1, padding=0, has_bias=False, pad_mode=\"valid\")\n", - " self.fc1 = bnn_layers.DenseReparam(16 * 5 * 5, 120)\n", - " self.fc2 = bnn_layers.DenseReparam(120, 84)\n", - " self.fc3 = bnn_layers.DenseReparam(84, self.num_class)\n", - " self.relu = nn.ReLU()\n", - " self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n", - " self.flatten = nn.Flatten()\n", - "\n", - " def construct(self, x):\n", - " x = self.max_pool2d(self.relu(self.conv1(x)))\n", - " x = self.max_pool2d(self.relu(self.conv2(x)))\n", - " x = self.flatten(x)\n", - " x = self.relu(self.fc1(x))\n", - " x = self.relu(self.fc2(x))\n", - " x = self.fc3(x)\n", - " return x\n", - "\n", - "network = BNNLeNet5(num_class=10)\n", - "for layer in network.trainable_params():\n", - " print(layer.name)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "打印信息表明,使用`bnn_layers`模块构建的LeNet网络,其卷积层和全连接层均为贝叶斯层。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 定义损失函数和优化器\n", - "\n", - "接下来需要定义损失函数(Loss)和优化器(Optimizer)。损失函数是深度学习的训练目标,也叫目标函数,可以理解为神经网络的输出(Logits)和标签(Labels)之间的距离,是一个标量数据。\n", - "\n", - "常见的损失函数包括均方误差、L2损失、Hinge损失、交叉熵等等。图像分类应用通常采用交叉熵损失(CrossEntropy)。\n", - "\n", - "优化器用于神经网络求解(训练)。由于神经网络参数规模庞大,无法直接求解,因而深度学习中采用随机梯度下降算法(SGD)及其改进算法进行求解。MindSpore封装了常见的优化器,如`SGD`、`Adam`、`Momemtum`等等。本例采用`Adam`优化器,通常需要设定两个参数,学习率(`learning_rate`)和权重衰减项(`weight_decay`)。\n", - "\n", - "MindSpore中定义损失函数和优化器的代码样例如下:" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [], - "source": [ - "import mindspore.nn as nn\n", - "\n", - "# loss function definition\n", - "criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction=\"mean\")\n", - "\n", - "# optimization definition\n", - "optimizer = nn.AdamWeightDecay(params=network.trainable_params(), learning_rate=0.0001)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 训练网络\n", - "\n", - 
"贝叶斯神经网络的训练过程与DNN基本相同,唯一不同的是将`WithLossCell`替换为适用于BNN的`WithBNNLossCell`。除了`backbone`和`loss_fn`两个参数之外,`WithBNNLossCell`增加了`dnn_factor`和`bnn_factor`两个参数。这两个参数是用来平衡网络整体损失和贝叶斯层的KL散度的,防止KL散度的值过大掩盖了网络整体损失。\n", - "\n", - "- `dnn_factor`是由损失函数计算得到的网络整体损失的系数。\n", - "- `bnn_factor`是每个贝叶斯层的KL散度的系数。\n", - "\n", - "构建模型训练函数`train_model`和模型验证函数`validate_model`。" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [], - "source": [ - "def train_model(train_net, net, dataset):\n", - " accs = []\n", - " loss_sum = 0\n", - " for _, data in enumerate(dataset.create_dict_iterator()):\n", - " train_x = ms.Tensor(data['image'].asnumpy().astype(np.float32))\n", - " label = ms.Tensor(data['label'].asnumpy().astype(np.int32))\n", - " loss = train_net(train_x, label)\n", - " output = net(train_x)\n", - " log_output = ops.LogSoftmax(axis=1)(output)\n", - " acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy())\n", - " accs.append(acc)\n", - " loss_sum += loss.asnumpy()\n", - "\n", - " loss_sum = loss_sum / len(accs)\n", - " acc_mean = np.mean(accs)\n", - " return loss_sum, acc_mean\n", - "\n", - "\n", - "def validate_model(net, dataset):\n", - " accs = []\n", - " for _, data in enumerate(dataset.create_dict_iterator()):\n", - " train_x = ms.Tensor(data['image'].asnumpy().astype(np.float32))\n", - " label = ms.Tensor(data['label'].asnumpy().astype(np.int32))\n", - " output = net(train_x)\n", - " log_output = ops.LogSoftmax(axis=1)(output)\n", - " acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy())\n", - " accs.append(acc)\n", - "\n", - " acc_mean = np.mean(accs)\n", - " return acc_mean" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "执行训练。" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Epoch: 1 Training Loss: 21444.8605 Training Accuracy: 0.8928 validation Accuracy: 0.9513\n", - 
"Epoch: 2 Training Loss: 9396.3887 Training Accuracy: 0.9536 validation Accuracy: 0.9635\n", - "Epoch: 3 Training Loss: 7320.2412 Training Accuracy: 0.9641 validation Accuracy: 0.9674\n", - "Epoch: 4 Training Loss: 6221.6970 Training Accuracy: 0.9685 validation Accuracy: 0.9731\n", - "Epoch: 5 Training Loss: 5450.9543 Training Accuracy: 0.9725 validation Accuracy: 0.9733\n", - "Epoch: 6 Training Loss: 4898.9741 Training Accuracy: 0.9754 validation Accuracy: 0.9767\n", - "Epoch: 7 Training Loss: 4505.7502 Training Accuracy: 0.9775 validation Accuracy: 0.9784\n", - "Epoch: 8 Training Loss: 4099.8783 Training Accuracy: 0.9797 validation Accuracy: 0.9791\n", - "Epoch: 9 Training Loss: 3795.2288 Training Accuracy: 0.9810 validation Accuracy: 0.9796\n", - "Epoch: 10 Training Loss: 3581.4254 Training Accuracy: 0.9823 validation Accuracy: 0.9773\n" - ] - } - ], - "source": [ - "from mindspore.nn import TrainOneStepCell\n", - "import mindspore as ms\n", - "import numpy as np\n", - "\n", - "net_with_loss = bnn_layers.WithBNNLossCell(network, criterion, dnn_factor=60000, bnn_factor=0.000001)\n", - "train_bnn_network = TrainOneStepCell(net_with_loss, optimizer)\n", - "train_bnn_network.set_train()\n", - "\n", - "train_set = create_dataset('./datasets/MNIST_Data/train', 64, 1)\n", - "test_set = create_dataset('./datasets/MNIST_Data/test', 64, 1)\n", - "\n", - "epoch = 10\n", - "\n", - "for i in range(epoch):\n", - " train_loss, train_acc = train_model(train_bnn_network, network, train_set)\n", - "\n", - " valid_acc = validate_model(network, test_set)\n", - "\n", - " print('Epoch: {} \\tTraining Loss: {:.4f} \\tTraining Accuracy: {:.4f} \\tvalidation Accuracy: {:.4f}'.\n", - " format(i+1, train_loss, train_acc, valid_acc))" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "MindSpore", - "language": "python", - "name": "mindspore" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": 
"text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.6" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} diff --git a/docs/recommender/docs/Makefile b/docs/recommender/docs/Makefile deleted file mode 100644 index 1eff8952707bdfa503c8d60c1e9a903053170ba2..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line, and also -# from the environment for the first two. -SPHINXOPTS ?= -SPHINXBUILD ?= sphinx-build -SOURCEDIR = source_zh_cn -BUILDDIR = build_zh_cn - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/docs/recommender/docs/_ext/customdocumenter.txt b/docs/recommender/docs/_ext/customdocumenter.txt deleted file mode 100644 index 2d37ae41f6772a21da2a7dc5c7bff75128e68330..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/_ext/customdocumenter.txt +++ /dev/null @@ -1,245 +0,0 @@ -import re -import os -from sphinx.ext.autodoc import Documenter - - -class CustomDocumenter(Documenter): - - def document_members(self, all_members: bool = False) -> None: - """Generate reST for member documentation. - - If *all_members* is True, do all members, else those given by - *self.options.members*. 
- """ - # set current namespace for finding members - self.env.temp_data['autodoc:module'] = self.modname - if self.objpath: - self.env.temp_data['autodoc:class'] = self.objpath[0] - - want_all = all_members or self.options.inherited_members or \ - self.options.members is ALL - # find out which members are documentable - members_check_module, members = self.get_object_members(want_all) - - # **** 排除已写中文接口名 **** - file_path = os.path.join(self.env.app.srcdir, self.env.docname+'.rst') - exclude_re = re.compile(r'(.. py:class::|.. py:function::)\s+(.*?)(\(|\n)') - includerst_re = re.compile(r'.. include::\s+(.*?)\n') - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - excluded_members = exclude_re.findall(content) - if excluded_members: - excluded_members = [i[1].split('.')[-1] for i in excluded_members] - rst_included = includerst_re.findall(content) - if rst_included: - for i in rst_included: - include_path = os.path.join(os.path.dirname(file_path), i) - if os.path.exists(include_path): - with open(include_path, 'r', encoding='utf8') as g: - content_ = g.read() - excluded_member_ = exclude_re.findall(content_) - if excluded_member_: - excluded_member_ = [j[1].split('.')[-1] for j in excluded_member_] - excluded_members.extend(excluded_member_) - - if excluded_members: - if self.options.exclude_members: - self.options.exclude_members |= set(excluded_members) - else: - self.options.exclude_members = excluded_members - - # remove members given by exclude-members - if self.options.exclude_members: - members = [ - (membername, member) for (membername, member) in members - if ( - self.options.exclude_members is ALL or - membername not in self.options.exclude_members - ) - ] - - # document non-skipped members - memberdocumenters = [] # type: List[Tuple[Documenter, bool]] - for (mname, member, isattr) in self.filter_members(members, want_all): - classes = [cls for cls in self.documenters.values() - if cls.can_document_member(member, mname, isattr, 
self)] - if not classes: - # don't know how to document this member - continue - # prefer the documenter with the highest priority - classes.sort(key=lambda cls: cls.priority) - # give explicitly separated module name, so that members - # of inner classes can be documented - full_mname = self.modname + '::' + \ - '.'.join(self.objpath + [mname]) - documenter = classes[-1](self.directive, full_mname, self.indent) - memberdocumenters.append((documenter, isattr)) - member_order = self.options.member_order or \ - self.env.config.autodoc_member_order - if member_order == 'groupwise': - # sort by group; relies on stable sort to keep items in the - # same group sorted alphabetically - memberdocumenters.sort(key=lambda e: e[0].member_order) - elif member_order == 'bysource' and self.analyzer: - # sort by source order, by virtue of the module analyzer - tagorder = self.analyzer.tagorder - - def keyfunc(entry: Tuple[Documenter, bool]) -> int: - fullname = entry[0].name.split('::')[1] - return tagorder.get(fullname, len(tagorder)) - memberdocumenters.sort(key=keyfunc) - - for documenter, isattr in memberdocumenters: - documenter.generate( - all_members=True, real_modname=self.real_modname, - check_module=members_check_module and not isattr) - - # reset current objects - self.env.temp_data['autodoc:module'] = None - self.env.temp_data['autodoc:class'] = None - - def generate(self, more_content: Any = None, real_modname: str = None, - check_module: bool = False, all_members: bool = False) -> None: - """Generate reST for the object given by *self.name*, and possibly for - its members. - - If *more_content* is given, include that content. If *real_modname* is - given, use that module name to find attribute docs. If *check_module* is - True, only generate if the object is defined in the module name it is - imported from. If *all_members* is True, document all members. 
- """ - if not self.parse_name(): - # need a module to import - logger.warning( - __('don\'t know which module to import for autodocumenting ' - '%r (try placing a "module" or "currentmodule" directive ' - 'in the document, or giving an explicit module name)') % - self.name, type='autodoc') - return - - # now, import the module and get object to document - if not self.import_object(): - return - - # If there is no real module defined, figure out which to use. - # The real module is used in the module analyzer to look up the module - # where the attribute documentation would actually be found in. - # This is used for situations where you have a module that collects the - # functions and classes of internal submodules. - self.real_modname = real_modname or self.get_real_modname() # type: str - - # try to also get a source code analyzer for attribute docs - try: - self.analyzer = ModuleAnalyzer.for_module(self.real_modname) - # parse right now, to get PycodeErrors on parsing (results will - # be cached anyway) - self.analyzer.find_attr_docs() - except PycodeError as err: - logger.debug('[autodoc] module analyzer failed: %s', err) - # no source file -- e.g. for builtin and C modules - self.analyzer = None - # at least add the module.__file__ as a dependency - if hasattr(self.module, '__file__') and self.module.__file__: - self.directive.filename_set.add(self.module.__file__) - else: - self.directive.filename_set.add(self.analyzer.srcname) - - # check __module__ of object (for members not given explicitly) - if check_module: - if not self.check_module(): - return - - # document members, if possible - self.document_members(all_members) - - -class ModuleDocumenter(CustomDocumenter): - """ - Specialized Documenter subclass for modules. 
- """ - objtype = 'module' - content_indent = '' - titles_allowed = True - - option_spec = { - 'members': members_option, 'undoc-members': bool_option, - 'noindex': bool_option, 'inherited-members': bool_option, - 'show-inheritance': bool_option, 'synopsis': identity, - 'platform': identity, 'deprecated': bool_option, - 'member-order': identity, 'exclude-members': members_set_option, - 'private-members': bool_option, 'special-members': members_option, - 'imported-members': bool_option, 'ignore-module-all': bool_option - } # type: Dict[str, Callable] - - def __init__(self, *args: Any) -> None: - super().__init__(*args) - merge_members_option(self.options) - - @classmethod - def can_document_member(cls, member: Any, membername: str, isattr: bool, parent: Any - ) -> bool: - # don't document submodules automatically - return False - - def resolve_name(self, modname: str, parents: Any, path: str, base: Any - ) -> Tuple[str, List[str]]: - if modname is not None: - logger.warning(__('"::" in automodule name doesn\'t make sense'), - type='autodoc') - return (path or '') + base, [] - - def parse_name(self) -> bool: - ret = super().parse_name() - if self.args or self.retann: - logger.warning(__('signature arguments or return annotation ' - 'given for automodule %s') % self.fullname, - type='autodoc') - return ret - - def add_directive_header(self, sig: str) -> None: - Documenter.add_directive_header(self, sig) - - sourcename = self.get_sourcename() - - # add some module-specific options - if self.options.synopsis: - self.add_line(' :synopsis: ' + self.options.synopsis, sourcename) - if self.options.platform: - self.add_line(' :platform: ' + self.options.platform, sourcename) - if self.options.deprecated: - self.add_line(' :deprecated:', sourcename) - - def get_object_members(self, want_all: bool) -> Tuple[bool, List[Tuple[str, object]]]: - if want_all: - if (self.options.ignore_module_all or not - hasattr(self.object, '__all__')): - # for implicit module members, check 
__module__ to avoid - # documenting imported objects - return True, get_module_members(self.object) - else: - memberlist = self.object.__all__ - # Sometimes __all__ is broken... - if not isinstance(memberlist, (list, tuple)) or not \ - all(isinstance(entry, str) for entry in memberlist): - logger.warning( - __('__all__ should be a list of strings, not %r ' - '(in module %s) -- ignoring __all__') % - (memberlist, self.fullname), - type='autodoc' - ) - # fall back to all members - return True, get_module_members(self.object) - else: - memberlist = self.options.members or [] - ret = [] - for mname in memberlist: - try: - ret.append((mname, safe_getattr(self.object, mname))) - except AttributeError: - logger.warning( - __('missing attribute mentioned in :members: or __all__: ' - 'module %s, attribute %s') % - (safe_getattr(self.object, '__name__', '???'), mname), - type='autodoc' - ) - return False, ret diff --git a/docs/recommender/docs/_ext/overwriteobjectiondirective.txt b/docs/recommender/docs/_ext/overwriteobjectiondirective.txt deleted file mode 100644 index e7ffdfe09a737771ead4a9c2ce1d0b945bb49947..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/_ext/overwriteobjectiondirective.txt +++ /dev/null @@ -1,374 +0,0 @@ -""" - sphinx.directives - ~~~~~~~~~~~~~~~~~ - - Handlers for additional ReST directives. - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import re -import inspect -import importlib -from functools import reduce -from typing import TYPE_CHECKING, Any, Dict, Generic, List, Tuple, TypeVar, cast - -from docutils import nodes -from docutils.nodes import Node -from docutils.parsers.rst import directives, roles - -from sphinx import addnodes -from sphinx.addnodes import desc_signature -from sphinx.deprecation import RemovedInSphinx50Warning, deprecated_alias -from sphinx.util import docutils, logging -from sphinx.util.docfields import DocFieldTransformer, Field, TypedField -from sphinx.util.docutils import SphinxDirective -from sphinx.util.typing import OptionSpec - -if TYPE_CHECKING: - from sphinx.application import Sphinx - - -# RE to strip backslash escapes -nl_escape_re = re.compile(r'\\\n') -strip_backslash_re = re.compile(r'\\(.)') - -T = TypeVar('T') -logger = logging.getLogger(__name__) - -def optional_int(argument: str) -> int: - """ - Check for an integer argument or None value; raise ``ValueError`` if not. - """ - if argument is None: - return None - else: - value = int(argument) - if value < 0: - raise ValueError('negative value; must be positive or zero') - return value - -def get_api(fullname): - """ - 获取接口对象。 - - :param fullname: 接口名全称 - :return: 属性对象或None(如果不存在) - """ - main_module = fullname.split('.')[0] - main_import = importlib.import_module(main_module) - - try: - return reduce(getattr, fullname.split('.')[1:], main_import) - except AttributeError: - return None - -def get_example(name: str): - try: - api_doc = inspect.getdoc(get_api(name)) - example_str = re.findall(r'Examples:\n([\w\W]*?)(\n\n|$)', api_doc) - if not example_str: - return [] - example_str = re.sub(r'\n\s+', r'\n', example_str[0][0]) - example_str = example_str.strip() - example_list = example_str.split('\n') - return ["", "**样例:**", ""] + example_list + [""] - except: - return [] - -def get_platforms(name: str): - try: - api_doc = inspect.getdoc(get_api(name)) - example_str = re.findall(r'Supported 
Platforms:\n\s+(.*?)\n\n', api_doc) - if not example_str: - example_str_leak = re.findall(r'Supported Platforms:\n\s+(.*)', api_doc) - if example_str_leak: - example_str = example_str_leak[0].strip() - example_list = example_str.split('\n') - example_list = [' ' + example_list[0]] - return ["", "支持平台:"] + example_list + [""] - return [] - example_str = example_str[0].strip() - example_list = example_str.split('\n') - example_list = [' ' + example_list[0]] - return ["", "支持平台:"] + example_list + [""] - except: - return [] - -class ObjectDescription(SphinxDirective, Generic[T]): - """ - Directive to describe a class, function or similar object. Not used - directly, but subclassed (in domain-specific directives) to add custom - behavior. - """ - - has_content = True - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = True - option_spec: OptionSpec = { - 'noindex': directives.flag, - } # type: Dict[str, DirectiveOption] - - # types of doc fields that this directive handles, see sphinx.util.docfields - doc_field_types: List[Field] = [] - domain: str = None - objtype: str = None - indexnode: addnodes.index = None - - # Warning: this might be removed in future version. Don't touch this from extensions. - _doc_field_type_map = {} # type: Dict[str, Tuple[Field, bool]] - - def get_field_type_map(self) -> Dict[str, Tuple[Field, bool]]: - if self._doc_field_type_map == {}: - self._doc_field_type_map = {} - for field in self.doc_field_types: - for name in field.names: - self._doc_field_type_map[name] = (field, False) - - if field.is_typed: - typed_field = cast(TypedField, field) - for name in typed_field.typenames: - self._doc_field_type_map[name] = (field, True) - - return self._doc_field_type_map - - def get_signatures(self) -> List[str]: - """ - Retrieve the signatures to document from the directive arguments. By - default, signatures are given as arguments, one per line. - - Backslash-escaping of newlines is supported. 
- """ - lines = nl_escape_re.sub('', self.arguments[0]).split('\n') - if self.config.strip_signature_backslash: - # remove backslashes to support (dummy) escapes; helps Vim highlighting - return [strip_backslash_re.sub(r'\1', line.strip()) for line in lines] - else: - return [line.strip() for line in lines] - - def handle_signature(self, sig: str, signode: desc_signature) -> Any: - """ - Parse the signature *sig* into individual nodes and append them to - *signode*. If ValueError is raised, parsing is aborted and the whole - *sig* is put into a single desc_name node. - - The return value should be a value that identifies the object. It is - passed to :meth:`add_target_and_index()` unchanged, and otherwise only - used to skip duplicates. - """ - raise ValueError - - def add_target_and_index(self, name: Any, sig: str, signode: desc_signature) -> None: - """ - Add cross-reference IDs and entries to self.indexnode, if applicable. - - *name* is whatever :meth:`handle_signature()` returned. - """ - return # do nothing by default - - def before_content(self) -> None: - """ - Called before parsing content. Used to set information about the current - directive context on the build environment. - """ - pass - - def transform_content(self, contentnode: addnodes.desc_content) -> None: - """ - Called after creating the content through nested parsing, - but before the ``object-description-transform`` event is emitted, - and before the info-fields are transformed. - Can be used to manipulate the content. - """ - pass - - def after_content(self) -> None: - """ - Called after parsing content. Used to reset information about the - current directive context on the build environment. - """ - pass - - def check_class_end(self, content): - for i in content: - if not i.startswith('.. 
include::') and i != "\n" and i != "": - return False - return True - - def extend_items(self, rst_file, start_num, num): - ls = [] - for i in range(1, num+1): - ls.append((rst_file, start_num+i)) - return ls - - def run(self) -> List[Node]: - """ - Main directive entry function, called by docutils upon encountering the - directive. - - This directive is meant to be quite easily subclassable, so it delegates - to several additional methods. What it does: - - * find out if called as a domain-specific directive, set self.domain - * create a `desc` node to fit all description inside - * parse standard options, currently `noindex` - * create an index node if needed as self.indexnode - * parse all given signatures (as returned by self.get_signatures()) - using self.handle_signature(), which should either return a name - or raise ValueError - * add index entries using self.add_target_and_index() - * parse the content and handle doc fields in it - """ - if ':' in self.name: - self.domain, self.objtype = self.name.split(':', 1) - else: - self.domain, self.objtype = '', self.name - self.indexnode = addnodes.index(entries=[]) - - node = addnodes.desc() - node.document = self.state.document - node['domain'] = self.domain - # 'desctype' is a backwards compatible attribute - node['objtype'] = node['desctype'] = self.objtype - node['noindex'] = noindex = ('noindex' in self.options) - if self.domain: - node['classes'].append(self.domain) - node['classes'].append(node['objtype']) - - self.names: List[T] = [] - signatures = self.get_signatures() - for sig in signatures: - # add a signature node for each signature in the current unit - # and add a reference target for it - signode = addnodes.desc_signature(sig, '') - self.set_source_info(signode) - node.append(signode) - try: - # name can also be a tuple, e.g. (classname, objname); - # this is strictly domain-specific (i.e. 
no assumptions may - # be made in this base class) - name = self.handle_signature(sig, signode) - except ValueError: - # signature parsing failed - signode.clear() - signode += addnodes.desc_name(sig, sig) - continue # we don't want an index entry here - if name not in self.names: - self.names.append(name) - if not noindex: - # only add target and index entry if this is the first - # description of the object with this name in this desc block - self.add_target_and_index(name, sig, signode) - - contentnode = addnodes.desc_content() - node.append(contentnode) - if self.names: - # needed for association of version{added,changed} directives - self.env.temp_data['object'] = self.names[0] - self.before_content() - try: - example = get_example(self.names[0][0]) - platforms = get_platforms(self.names[0][0]) - except Exception as e: - example = '' - platforms = '' - logger.warning(f'Error API names in {self.arguments[0]}.') - logger.warning(f'{e}') - extra = platforms + example - if extra: - if self.objtype == "method": - self.content.data.extend(extra) - else: - index_num = 0 - for num, i in enumerate(self.content.data): - if i.startswith('.. 
py:method::') or self.check_class_end(self.content.data[num:]): - index_num = num - break - if index_num: - count = len(self.content.data) - for i in extra: - self.content.data.insert(index_num-count, i) - else: - self.content.data.extend(extra) - try: - self.content.items.extend(self.extend_items(self.content.items[0][0], self.content.items[-1][1], len(extra))) - except Exception as e: - logger.warning(f'{e}') - self.state.nested_parse(self.content, self.content_offset, contentnode) - self.transform_content(contentnode) - self.env.app.emit('object-description-transform', - self.domain, self.objtype, contentnode) - DocFieldTransformer(self).transform_all(contentnode) - self.env.temp_data['object'] = None - self.after_content() - return [self.indexnode, node] - - -class DefaultRole(SphinxDirective): - """ - Set the default interpreted text role. Overridden from docutils. - """ - - optional_arguments = 1 - final_argument_whitespace = False - - def run(self) -> List[Node]: - if not self.arguments: - docutils.unregister_role('') - return [] - role_name = self.arguments[0] - role, messages = roles.role(role_name, self.state_machine.language, - self.lineno, self.state.reporter) - if role: - docutils.register_role('', role) - self.env.temp_data['default_role'] = role_name - else: - literal_block = nodes.literal_block(self.block_text, self.block_text) - reporter = self.state.reporter - error = reporter.error('Unknown interpreted text role "%s".' % role_name, - literal_block, line=self.lineno) - messages += [error] - - return cast(List[nodes.Node], messages) - - -class DefaultDomain(SphinxDirective): - """ - Directive to (re-)set the default domain for this source file. 
- """ - - has_content = False - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = False - option_spec = {} # type: Dict - - def run(self) -> List[Node]: - domain_name = self.arguments[0].lower() - # if domain_name not in env.domains: - # # try searching by label - # for domain in env.domains.values(): - # if domain.label.lower() == domain_name: - # domain_name = domain.name - # break - self.env.temp_data['default_domain'] = self.env.domains.get(domain_name) - return [] - -def setup(app: "Sphinx") -> Dict[str, Any]: - app.add_config_value("strip_signature_backslash", False, 'env') - directives.register_directive('default-role', DefaultRole) - directives.register_directive('default-domain', DefaultDomain) - directives.register_directive('describe', ObjectDescription) - # new, more consistent, name - directives.register_directive('object', ObjectDescription) - - app.add_event('object-description-transform') - - return { - 'version': 'builtin', - 'parallel_read_safe': True, - 'parallel_write_safe': True, - } - diff --git a/docs/recommender/docs/_ext/overwriteviewcode.txt b/docs/recommender/docs/_ext/overwriteviewcode.txt deleted file mode 100644 index 172780ec56b3ed90e7b0add617257a618cf38ee0..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/_ext/overwriteviewcode.txt +++ /dev/null @@ -1,378 +0,0 @@ -""" - sphinx.ext.viewcode - ~~~~~~~~~~~~~~~~~~~ - - Add links to module code in Python object descriptions. - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import posixpath -import traceback -import warnings -from os import path -from typing import Any, Dict, Generator, Iterable, Optional, Set, Tuple, cast - -from docutils import nodes -from docutils.nodes import Element, Node - -import sphinx -from sphinx import addnodes -from sphinx.application import Sphinx -from sphinx.builders import Builder -from sphinx.builders.html import StandaloneHTMLBuilder -from sphinx.deprecation import RemovedInSphinx50Warning -from sphinx.environment import BuildEnvironment -from sphinx.locale import _, __ -from sphinx.pycode import ModuleAnalyzer -from sphinx.transforms.post_transforms import SphinxPostTransform -from sphinx.util import get_full_modname, logging, status_iterator -from sphinx.util.nodes import make_refnode - - -logger = logging.getLogger(__name__) - - -OUTPUT_DIRNAME = '_modules' - - -class viewcode_anchor(Element): - """Node for viewcode anchors. - - This node will be processed in the resolving phase. - For viewcode supported builders, they will be all converted to the anchors. - For not supported builders, they will be removed. - """ - - -def _get_full_modname(app: Sphinx, modname: str, attribute: str) -> Optional[str]: - try: - return get_full_modname(modname, attribute) - except AttributeError: - # sphinx.ext.viewcode can't follow class instance attribute - # then AttributeError logging output only verbose mode. - logger.verbose('Didn\'t find %s in %s', attribute, modname) - return None - except Exception as e: - # sphinx.ext.viewcode follow python domain directives. - # because of that, if there are no real modules exists that specified - # by py:function or other directives, viewcode emits a lot of warnings. - # It should be displayed only verbose mode. 
-        logger.verbose(traceback.format_exc().rstrip())
-        logger.verbose('viewcode can\'t import %s, failed with error "%s"', modname, e)
-        return None
-
-
-def is_supported_builder(builder: Builder) -> bool:
-    if builder.format != 'html':
-        return False
-    elif builder.name == 'singlehtml':
-        return False
-    elif builder.name.startswith('epub') and not builder.config.viewcode_enable_epub:
-        return False
-    else:
-        return True
-
-
-def doctree_read(app: Sphinx, doctree: Node) -> None:
-    env = app.builder.env
-    if not hasattr(env, '_viewcode_modules'):
-        env._viewcode_modules = {}  # type: ignore
-
-    def has_tag(modname: str, fullname: str, docname: str, refname: str) -> bool:
-        entry = env._viewcode_modules.get(modname, None)  # type: ignore
-        if entry is False:
-            return False
-
-        code_tags = app.emit_firstresult('viewcode-find-source', modname)
-        if code_tags is None:
-            try:
-                analyzer = ModuleAnalyzer.for_module(modname)
-                analyzer.find_tags()
-            except Exception:
-                env._viewcode_modules[modname] = False  # type: ignore
-                return False
-
-            code = analyzer.code
-            tags = analyzer.tags
-        else:
-            code, tags = code_tags
-
-        if entry is None or entry[0] != code:
-            entry = code, tags, {}, refname
-            env._viewcode_modules[modname] = entry  # type: ignore
-        _, tags, used, _ = entry
-        if fullname in tags:
-            used[fullname] = docname
-            return True
-
-        return False
-
-    for objnode in list(doctree.findall(addnodes.desc)):
-        if objnode.get('domain') != 'py':
-            continue
-        names: Set[str] = set()
-        for signode in objnode:
-            if not isinstance(signode, addnodes.desc_signature):
-                continue
-            modname = signode.get('module')
-            fullname = signode.get('fullname')
-            try:
-                if fullname and modname is None:
-                    if (fullname.split('.')[-1].lower() == fullname.split('.')[-1]
-                            and fullname.split('.')[-2].lower() != fullname.split('.')[-2]):
-                        modname = '.'.join(fullname.split('.')[:-2])
-                        fullname = '.'.join(fullname.split('.')[-2:])
-                    else:
-                        modname = '.'.join(fullname.split('.')[:-1])
-                        fullname = fullname.split('.')[-1]
-                fullname_new = fullname
-            except Exception:
-                logger.warning(f'error_modename:{modname}')
-                logger.warning(f'error_fullname:{fullname}')
-            refname = modname
-            if env.config.viewcode_follow_imported_members:
-                new_modname = app.emit_firstresult(
-                    'viewcode-follow-imported', modname, fullname,
-                )
-                if not new_modname:
-                    new_modname = _get_full_modname(app, modname, fullname)
-                modname = new_modname
-                # logger.warning(f'new_modename:{modname}')
-            if not modname:
-                continue
-            # fullname = signode.get('fullname')
-            # if fullname and modname is None:
-            fullname = fullname_new
-            if not has_tag(modname, fullname, env.docname, refname):
-                continue
-            if fullname in names:
-                # only one link per name, please
-                continue
-            names.add(fullname)
-            pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))
-            signode += viewcode_anchor(reftarget=pagename, refid=fullname, refdoc=env.docname)
-
-
-def env_merge_info(app: Sphinx, env: BuildEnvironment, docnames: Iterable[str],
-                   other: BuildEnvironment) -> None:
-    if not hasattr(other, '_viewcode_modules'):
-        return
-    # create a _viewcode_modules dict on the main environment
-    if not hasattr(env, '_viewcode_modules'):
-        env._viewcode_modules = {}  # type: ignore
-    # now merge in the information from the subprocess
-    for modname, entry in other._viewcode_modules.items():  # type: ignore
-        if modname not in env._viewcode_modules:  # type: ignore
-            env._viewcode_modules[modname] = entry  # type: ignore
-        else:
-            if env._viewcode_modules[modname]:  # type: ignore
-                used = env._viewcode_modules[modname][2]  # type: ignore
-                for fullname, docname in entry[2].items():
-                    if fullname not in used:
-                        used[fullname] = docname
-
-
-def env_purge_doc(app: Sphinx, env: BuildEnvironment, docname: str) -> None:
-    modules = getattr(env, '_viewcode_modules', {})
-
-    for modname, entry in list(modules.items()):
-        if entry is False:
-            continue
-
-        code, tags, used, refname = entry
-        for fullname in list(used):
-            if used[fullname] == docname:
-                used.pop(fullname)
-
-        if len(used) == 0:
-            modules.pop(modname)
-
-
-class ViewcodeAnchorTransform(SphinxPostTransform):
-    """Convert or remove viewcode_anchor nodes depends on builder."""
-    default_priority = 100
-
-    def run(self, **kwargs: Any) -> None:
-        if is_supported_builder(self.app.builder):
-            self.convert_viewcode_anchors()
-        else:
-            self.remove_viewcode_anchors()
-
-    def convert_viewcode_anchors(self) -> None:
-        for node in self.document.findall(viewcode_anchor):
-            anchor = nodes.inline('', _('[源代码]'), classes=['viewcode-link'])
-            refnode = make_refnode(self.app.builder, node['refdoc'], node['reftarget'],
-                                   node['refid'], anchor)
-            node.replace_self(refnode)
-
-    def remove_viewcode_anchors(self) -> None:
-        for node in list(self.document.findall(viewcode_anchor)):
-            node.parent.remove(node)
-
-
-def missing_reference(app: Sphinx, env: BuildEnvironment, node: Element, contnode: Node
-                      ) -> Optional[Node]:
-    # resolve our "viewcode" reference nodes -- they need special treatment
-    if node['reftype'] == 'viewcode':
-        warnings.warn('viewcode extension is no longer use pending_xref node. '
-                      'Please update your extension.', RemovedInSphinx50Warning)
-        return make_refnode(app.builder, node['refdoc'], node['reftarget'],
-                            node['refid'], contnode)
-
-    return None
-
-
-def get_module_filename(app: Sphinx, modname: str) -> Optional[str]:
-    """Get module filename for *modname*."""
-    source_info = app.emit_firstresult('viewcode-find-source', modname)
-    if source_info:
-        return None
-    else:
-        try:
-            filename, source = ModuleAnalyzer.get_module_source(modname)
-            return filename
-        except Exception:
-            return None
-
-
-def should_generate_module_page(app: Sphinx, modname: str) -> bool:
-    """Check generation of module page is needed."""
-    module_filename = get_module_filename(app, modname)
-    if module_filename is None:
-        # Always (re-)generate module page when module filename is not found.
- return True - - builder = cast(StandaloneHTMLBuilder, app.builder) - basename = modname.replace('.', '/') + builder.out_suffix - page_filename = path.join(app.outdir, '_modules/', basename) - - try: - if path.getmtime(module_filename) <= path.getmtime(page_filename): - # generation is not needed if the HTML page is newer than module file. - return False - except IOError: - pass - - return True - - -def collect_pages(app: Sphinx) -> Generator[Tuple[str, Dict[str, Any], str], None, None]: - env = app.builder.env - if not hasattr(env, '_viewcode_modules'): - return - if not is_supported_builder(app.builder): - return - highlighter = app.builder.highlighter # type: ignore - urito = app.builder.get_relative_uri - - modnames = set(env._viewcode_modules) # type: ignore - - for modname, entry in status_iterator( - sorted(env._viewcode_modules.items()), # type: ignore - __('highlighting module code... '), "blue", - len(env._viewcode_modules), # type: ignore - app.verbosity, lambda x: x[0]): - if not entry: - continue - if not should_generate_module_page(app, modname): - continue - - code, tags, used, refname = entry - # construct a page name for the highlighted source - pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/')) - # highlight the source using the builder's highlighter - if env.config.highlight_language in ('python3', 'default', 'none'): - lexer = env.config.highlight_language - else: - lexer = 'python' - highlighted = highlighter.highlight_block(code, lexer, linenos=False) - # split the code into lines - lines = highlighted.splitlines() - # split off wrap markup from the first line of the actual code - before, after = lines[0].split('
    ')
    -        lines[0:1] = [before + '
    ', after]
    -        # nothing to do for the last line; it always starts with 
    anyway - # now that we have code lines (starting at index 1), insert anchors for - # the collected tags (HACK: this only works if the tag boundaries are - # properly nested!) - maxindex = len(lines) - 1 - for name, docname in used.items(): - type, start, end = tags[name] - backlink = urito(pagename, docname) + '#' + refname + '.' + name - lines[start] = ( - '
    %s' % (name, backlink, _('[文档]')) + - lines[start]) - lines[min(end, maxindex)] += '
    ' - # try to find parents (for submodules) - parents = [] - parent = modname - while '.' in parent: - parent = parent.rsplit('.', 1)[0] - if parent in modnames: - parents.append({ - 'link': urito(pagename, - posixpath.join(OUTPUT_DIRNAME, parent.replace('.', '/'))), - 'title': parent}) - parents.append({'link': urito(pagename, posixpath.join(OUTPUT_DIRNAME, 'index')), - 'title': _('Module code')}) - parents.reverse() - # putting it all together - context = { - 'parents': parents, - 'title': modname, - 'body': (_('

    Source code for %s

    ') % modname + - '\n'.join(lines)), - } - yield (pagename, context, 'page.html') - - if not modnames: - return - - html = ['\n'] - # the stack logic is needed for using nested lists for submodules - stack = [''] - for modname in sorted(modnames): - if modname.startswith(stack[-1]): - stack.append(modname + '.') - html.append('
      ') - else: - stack.pop() - while not modname.startswith(stack[-1]): - stack.pop() - html.append('
    ') - stack.append(modname + '.') - html.append('
  • %s
  • \n' % ( - urito(posixpath.join(OUTPUT_DIRNAME, 'index'), - posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))), - modname)) - html.append('' * (len(stack) - 1)) - context = { - 'title': _('Overview: module code'), - 'body': (_('

    All modules for which code is available

    ') + - ''.join(html)), - } - - yield (posixpath.join(OUTPUT_DIRNAME, 'index'), context, 'page.html') - - -def setup(app: Sphinx) -> Dict[str, Any]: - app.add_config_value('viewcode_import', None, False) - app.add_config_value('viewcode_enable_epub', False, False) - app.add_config_value('viewcode_follow_imported_members', True, False) - app.connect('doctree-read', doctree_read) - app.connect('env-merge-info', env_merge_info) - app.connect('env-purge-doc', env_purge_doc) - app.connect('html-collect-pages', collect_pages) - app.connect('missing-reference', missing_reference) - # app.add_config_value('viewcode_include_modules', [], 'env') - # app.add_config_value('viewcode_exclude_modules', [], 'env') - app.add_event('viewcode-find-source') - app.add_event('viewcode-follow-imported') - app.add_post_transform(ViewcodeAnchorTransform) - return { - 'version': sphinx.__display_version__, - 'env_version': 1, - 'parallel_read_safe': True - } diff --git a/docs/recommender/docs/_ext/rename_include.py b/docs/recommender/docs/_ext/rename_include.py deleted file mode 100644 index bf7dea25f3ee7fd371659e80a3551439fbddee5a..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/_ext/rename_include.py +++ /dev/null @@ -1,60 +0,0 @@ -"""Rename .rst file to .txt file for include directive.""" -import os -import re -import glob -import logging - -logging.basicConfig(level=logging.WARNING, format='%(message)s') -logger = logging.getLogger(__name__) - -origin = "rst" -replace = "txt" - -include_re = re.compile(r'\.\. include::\s+(.*?)(\.rst|\.txt)') -include_re_sub = re.compile(rf'(\.\. include::\s+(.*?))\.{origin}') - -# Specified file_name lists excluded from rename procedure. 
-whitepaper = ['operations.rst'] - -def repl(matchobj): - """Replace functions for matched.""" - if matchobj.group(2).split('/')[-1] + f'.{origin}' in whitepaper: - return matchobj.group(0) - return rf'{matchobj.group(1)}.{replace}' - -def rename_include(api_dir): - """ - Rename .rst file to .txt file for include directive. - - api_dir - api path relative. - """ - tar = [] - for root, _, files in os.walk(api_dir): - for file in files: - if not file.endswith('.rst'): - continue - try: - with open(os.path.join(root, file), 'r+', encoding='utf-8') as f: - content = f.read() - tar_ = include_re.findall(content) - if tar_: - tar_ = [i[0].split('/')[-1]+f'.{origin}' for i in tar_] - tar.extend(tar_) - sub = include_re_sub.findall(content) - if sub: - content_ = include_re_sub.sub(repl, content) - f.seek(0) - f.truncate() - f.write(content_) - except UnicodeDecodeError: - # pylint: disable=logging-fstring-interpolation - logger.warning(f"UnicodeDecodeError for: {file}") - - all_rst = glob.glob(f'{api_dir}/**/*.{origin}', recursive=True) - - for i in all_rst: - if os.path.dirname(i).endswith("api_python") or os.path.basename(i) in whitepaper: - continue - name = os.path.basename(i) - if name in tar: - os.rename(i, i.replace(f'.{origin}', f'.{replace}')) diff --git a/docs/recommender/docs/requirements.txt b/docs/recommender/docs/requirements.txt deleted file mode 100644 index a1b6a69f6dbd9c6f78710f56889e14f0e85b27f4..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/requirements.txt +++ /dev/null @@ -1,7 +0,0 @@ -sphinx == 4.4.0 -docutils == 0.17.1 -myst-parser == 0.18.1 -sphinx_rtd_theme == 1.0.0 -numpy -IPython -jieba diff --git a/docs/recommender/docs/source_en/conf.py b/docs/recommender/docs/source_en/conf.py deleted file mode 100644 index d3893865bf5905f0f493a7e2e9f10dffa4157170..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/source_en/conf.py +++ /dev/null @@ -1,154 +0,0 @@ -# Configuration file for the Sphinx documentation 
builder. -# -# This file only contains a selection of the most common options. For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import shutil -import sys -import IPython -import re -sys.path.append(os.path.abspath('../_ext')) -from sphinx.ext import autodoc as sphinx_autodoc - -import mindspore_rec - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Recommender' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'myst_parser', - 'sphinx.ext.mathjax', - 'IPython.sphinxext.ipython_console_highlighting' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. 
-mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -pygments_style = 'sphinx' - -# -- Options for HTML output ------------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = 'sphinx_rtd_theme' - -import sphinx_rtd_theme -layout_target = os.path.join(os.path.dirname(sphinx_rtd_theme.__file__), 'layout.html') -layout_src = '../../../../resource/_static/layout.html' -if os.path.exists(layout_target): - os.remove(layout_target) -shutil.copy(layout_src, layout_target) - -html_search_language = 'en' - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - 'python': ('https://docs.python.org/3', '../../../../resource/python_objects.inv'), -} - -# Modify default signatures for autodoc. 
-autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__)
-autodoc_source_re = re.compile(r'stringify_signature\(.*?\)')
-get_param_func_str = r"""\
-import re
-import inspect as inspect_
-
-def get_param_func(func):
-    try:
-        source_code = inspect_.getsource(func)
-        if func.__doc__:
-            source_code = source_code.replace(func.__doc__, '')
-        all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code)
-        all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\""))
-        return all_params
-    except:
-        return ''
-
-def get_obj(obj):
-    if isinstance(obj, type):
-        return obj.__init__
-
-    return obj
-"""
-
-with open(autodoc_source_path, "r+", encoding="utf8") as f:
-    code_str = f.read()
-    code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0)
-    exec(get_param_func_str, sphinx_autodoc.__dict__)
-    exec(code_str, sphinx_autodoc.__dict__)
-
-sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
-# import anchor_mod
-import nbsphinx_mod
-
-sys.path.append(os.path.abspath('../../../../resource/search'))
-import search_code
-
-sys.path.append(os.path.abspath('../../../../resource/custom_directives'))
-from custom_directives import IncludeCodeDirective
-
-def setup(app):
-    app.add_directive('includecode', IncludeCodeDirective)
-
-try:
-    src_release = os.path.join(os.getenv("RD_PATH"), 'RELEASE.md')
-    des_release = "./RELEASE.md"
-    with open(src_release, "r", encoding="utf-8") as f:
-        data = f.read()
-    if len(re.findall("\n## (.*?)\n", data)) > 1:
-        content = re.findall("(## [\s\S\n]*?)\n## ", data)
-    else:
-        content = re.findall("(## [\s\S\n]*)", data)
-    # result = content[0].replace('# MindSpore', '#', 1)
-    with open(des_release, "w", encoding="utf-8") as p:
-        p.write("# Release Notes" + "\n\n")
-        p.write(content[0])
-except Exception as e:
-    print('Failed to copy the release file:', e)
\ No newline at end of file
diff --git 
a/docs/recommender/docs/source_en/images/architecture.png b/docs/recommender/docs/source_en/images/architecture.png deleted file mode 100644 index ea84b417d855562fd4696482e6dd91cc88338712..0000000000000000000000000000000000000000 Binary files a/docs/recommender/docs/source_en/images/architecture.png and /dev/null differ diff --git a/docs/recommender/docs/source_en/images/offline_training.png b/docs/recommender/docs/source_en/images/offline_training.png deleted file mode 100644 index 41eac8a2105a981227866823e209cd04e8ccb391..0000000000000000000000000000000000000000 Binary files a/docs/recommender/docs/source_en/images/offline_training.png and /dev/null differ diff --git a/docs/recommender/docs/source_en/images/online_training.png b/docs/recommender/docs/source_en/images/online_training.png deleted file mode 100644 index 6b14cce2cbed19eac7a0ed4fbe7f816d0ab41600..0000000000000000000000000000000000000000 Binary files a/docs/recommender/docs/source_en/images/online_training.png and /dev/null differ diff --git a/docs/recommender/docs/source_en/index.rst b/docs/recommender/docs/source_en/index.rst deleted file mode 100644 index 61b8a4f88195c47f2e0eae37a3efd24012649082..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/source_en/index.rst +++ /dev/null @@ -1,45 +0,0 @@ -MindSpore Recommender Documents -================================ - -MindSpore Recommender is an open source training acceleration library based on the MindSpore framework for the recommendation domain. With MindSpore's large-scale heterogeneous computing acceleration capability, MindSpore Recommender supports efficient training of large-scale dynamic features for online and offline scenarios. - -.. raw:: html - -

-
-The MindSpore Recommender acceleration library consists of the following components:
-
-- Online training: streams data from real-time sources (e.g., Kafka) and processes it online, implementing online training and incremental model updates for business scenarios that require models to be updated in real time.
-- Offline training: for the traditional scenario of training on offline datasets, supports training recommendation models with large-scale feature vectors through automatic parallelism, distributed feature caching, heterogeneous acceleration, and other techniques.
-- Data processing: MindSpore Pandas and MindData read and process data both online and offline; full-Python expression support avoids the overhead of mixing multiple languages and frameworks and provides an efficient data path from data processing to model training.
-- Model library: a continuously growing collection of typical recommendation models, rigorously validated for accuracy and performance and usable out of the box.
-
-Code repository address:
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: Installation
-
-   install
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: Guide
-
-   offline_learning
-   online_learning
-
-.. toctree::
-   :maxdepth: 1
-   :caption: API References
-
-   recommender
-
-.. 
toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: RELEASE NOTES
-
-   RELEASE
diff --git a/docs/recommender/docs/source_en/install.md b/docs/recommender/docs/source_en/install.md
deleted file mode 100644
index 3c757970deab69b957a28393e80af6716369322e..0000000000000000000000000000000000000000
--- a/docs/recommender/docs/source_en/install.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# MindSpore Recommender Installation
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/recommender/docs/source_en/install.md)
-
-MindSpore Recommender depends on the MindSpore training framework, so install [MindSpore](https://gitee.com/mindspore/mindspore/blob/master/README.md#installation) first and then install MindSpore Recommender, either via pip or by compiling from source.
-
-## Installing from pip Command
-
-To install via pip, download and install the whl package from the [MindSpore Recommender download page](https://www.mindspore.cn/versions).
-
-```shell
-pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{ms_version}/Recommender/any/mindspore_rec-{mr_version}-py3-none-any.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
-```
-
-- The dependencies of the MindSpore Recommender package are downloaded automatically when the whl package is installed with a network connection (see requirements.txt for the full list); otherwise, you need to install them yourself.
-- `{ms_version}` is the MindSpore version number that matches this MindSpore Recommender release.
-- `{mr_version}` is the MindSpore Recommender version number; for example, to download MindSpore Recommender 0.2.0, set `{mr_version}` to 0.2.0.
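To make the `{ms_version}`/`{mr_version}` placeholders concrete, here is a small illustrative Python sketch that assembles the whl URL from the template above; `rec_whl_url` is a hypothetical helper, and the version pair shown is an example only — use a pair that actually exists on the download page:

```python
# Illustrative only: fill the {ms_version}/{mr_version} placeholders of the
# install URL described above. The versions passed in must be a real release pair.
def rec_whl_url(ms_version: str, mr_version: str) -> str:
    base = "https://ms-release.obs.cn-north-4.myhuaweicloud.com"
    return (f"{base}/{ms_version}/Recommender/any/"
            f"mindspore_rec-{mr_version}-py3-none-any.whl")

# Example (hypothetical version pair):
print(rec_whl_url("2.0.0", "0.2.0"))
# → https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.0.0/Recommender/any/mindspore_rec-0.2.0-py3-none-any.whl
```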
-
-## Installing from Source Code
-
-Download the [source code](https://github.com/mindspore-lab/mindrec) and enter the `mindrec` directory.
-
-```shell
-bash build.sh
-pip install output/mindspore_rec-0.2.0-py3-none-any.whl
-```
-
-`build.sh` is the compilation script in the top-level directory of the source code.
-
-## Verification
-
-Execute the following command to verify the installation. The installation is successful if no error is reported when the Python module is imported.
-
-```python
-import mindspore_rec
-```
diff --git a/docs/recommender/docs/source_en/offline_learning.md b/docs/recommender/docs/source_en/offline_learning.md
deleted file mode 100644
index d9e290646b779b129827b5511beb7877e3e211de..0000000000000000000000000000000000000000
--- a/docs/recommender/docs/source_en/offline_learning.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Offline Training
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/recommender/docs/source_en/offline_learning.md)
-
-## Overview
-
-One of the main challenges of training recommendation models is storing and training large-scale feature vectors. MindSpore Recommender provides a complete solution for training large-scale feature vectors in offline scenarios.
-
-## Overall Architecture
-
-The training architecture for large-scale feature vectors in recommendation models is shown in the figure below. At its core is a distributed multi-level embedding cache; combined with model-parallel distributed training across multiple machines and devices, it enables large-scale, low-cost training of large recommendation models.
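As a rough illustration of the multi-level embedding cache idea, here is a toy Python sketch under stated assumptions: `EmbeddingCache` is a hypothetical class, the two tiers are plain dictionaries standing in for device and host memory, and eviction is simplistic rather than the optimized policy a real implementation would use:

```python
# Toy sketch of a two-level embedding cache (hypothetical, not the
# MindSpore Recommender API): a small fast "device" tier backed by a
# larger "host" tier, with eviction from device to host when full.
class EmbeddingCache:
    def __init__(self, device_capacity: int, dim: int = 8):
        self.device = {}              # small, fast tier (stands in for device memory)
        self.host = {}                # large, slower tier (stands in for host memory)
        self.capacity = device_capacity
        self.dim = dim

    def lookup(self, feature_id: int):
        # hit in the device tier: return immediately
        if feature_id in self.device:
            return self.device[feature_id]
        # miss: fetch from the host tier (or initialize a fresh embedding)
        vec = self.host.pop(feature_id, [0.0] * self.dim)
        if len(self.device) >= self.capacity:
            # evict one entry back to host memory to make room
            old_id, old_vec = self.device.popitem()
            self.host[old_id] = old_vec
        self.device[feature_id] = vec
        return vec
```

The real system extends this idea with a persistent-storage tier and distributed partitioning of the embedding table, which is what makes training feature vectors larger than device memory feasible.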
-
-![image.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/recommender/docs/source_en/images/offline_training.png)
-
-## Example
-
-[Wide&Deep distributed training](https://github.com/mindspore-lab/mindrec/tree/master/models/wide_deep)
diff --git a/docs/recommender/docs/source_en/online_learning.md b/docs/recommender/docs/source_en/online_learning.md
deleted file mode 100644
index b06e58847548b2f35c648507f045e6badc367755..0000000000000000000000000000000000000000
--- a/docs/recommender/docs/source_en/online_learning.md
+++ /dev/null
@@ -1,212 +0,0 @@
-# Online Learning
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/recommender/docs/source_en/online_learning.md)
-
-## Overview
-
-How quickly a recommendation model can be updated is one of its key technical metrics, and online learning substantially improves the timeliness of model updates.
-
-Key differences between online learning and offline training:
-
-1. The dataset for online learning is streaming data with no fixed dataset size or epoch count, while an offline training dataset has a fixed size and number of epochs.
-2. Online learning runs as a resident service, while offline training exits when the training job finishes.
-3. Online learning collects and stores training data, and drives the training process once a fixed amount of data has been collected or a fixed time window has elapsed.
-
-## Overall Architecture
-
-The user's streaming training data is pushed to Kafka. MindSpore Pandas reads the data from Kafka, applies feature engineering, and writes the result to the feature storage engine; MindData then reads it from the storage engine as training data. MindSpore runs as a resident service that continuously receives data and performs training, as shown in the following figure:
-
-![image.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/recommender/docs/source_en/images/online_training.png)
-
-## Use Constraints
-
-- Python 3.8 or later is required.
-- Currently only GPU training on Linux is supported.
-
-## Python Package Dependencies
-
-mindpandas v0.1.0
-
-mindspore_rec v0.2.0
-
-kafka-python v2.0.2
-
-## Example
-
-The following example walks through online learning by training Wide&Deep on the Criteo dataset. The sample code is located at [Online Learning](https://github.com/mindspore-lab/mindrec/tree/master/examples/online_learning).
-
-MindSpore Recommender provides the dedicated model class `RecModel` for online learning. Combined with MindSpore Pandas and the real-time data source Kafka for data reading and feature processing, it implements a simple online learning flow.
-First, define a custom dataset for real-time data processing: the constructor parameter `receiver` is a MindSpore Pandas `DataReceiver` that receives real-time data, and `__getitem__` reads one piece of data at a time.
-
-```python
-class StreamingDataset:
-    def __init__(self, receiver):
-        self.data_ = []
-        self.receiver_ = receiver
-
-    def __getitem__(self, item):
-        while not self.data_:
-            data = self.receiver_.recv()
-            if data is not None:
-                self.data_ = data.tolist()
-
-        last_row = self.data_.pop()
-        return np.array(last_row[0], dtype=np.int32), np.array(last_row[1], dtype=np.float32), np.array(last_row[2], dtype=np.float32)
-```
-
-Then wrap this custom dataset into the online dataset required by `RecModel`.
-
-```python
-from mindpandas.channel import DataReceiver
-from mindspore_rec import RecModel as Model
-
-receiver = DataReceiver(address=config.address, namespace=config.namespace,
-                        dataset_name=config.dataset_name, shard_id=0)
-stream_dataset = StreamingDataset(receiver)
-
-dataset = ds.GeneratorDataset(stream_dataset, column_names=["id", "weight", "label"])
-dataset = dataset.batch(config.batch_size)
-
-train_net, _ = GetWideDeepNet(config)
-train_net.set_train()
-
-model = Model(train_net)
-```
-
-After configuring the export strategy for the model checkpoint, start the online training process.
-
-```python
-ckptconfig = CheckpointConfig(save_checkpoint_steps=100, keep_checkpoint_max=5)
-ckpoint_cb = ModelCheckpoint(prefix='widedeep_train', directory="./ckpt", config=ckptconfig)
-
-model.online_train(dataset, callbacks=[TimeMonitor(1), callback, ckpoint_cb], dataset_sink_mode=True)
-```
-
-The following sections describe how to start each module involved in the online learning process:
-
-### Downloading Kafka
-
-```bash
-wget https://archive.apache.org/dist/kafka/3.2.0/kafka_2.13-3.2.0.tgz
-tar -xzf kafka_2.13-3.2.0.tgz
-cd kafka_2.13-3.2.0
-```
-
-To install other versions, please refer to .
-
-### Starting kafka-zookeeper
-
-```bash
-bin/zookeeper-server-start.sh config/zookeeper.properties
-```
-
-### Starting kafka-server
-
-Open another terminal and start the Kafka service.
-
-```bash
-bin/kafka-server-start.sh config/server.properties
-```
-
-### Starting kafka_client
-
-Enter the online learning example directory of the recommender repo and start kafka_client. kafka_client only needs to be started once; it uses Kafka to set the number of partitions for the corresponding topic.
- -```bash -cd recommender/examples/online_learning -python kafka_client.py -``` - -### Starting a Distributed Computing Engine - -```bash -yrctl start --master --address $MASTER_HOST_IP - -# Parameter description -# --master: indicates that the current host is the master node. Non-master nodes do not need to specify the '--master' parameter -# --address: IP of the master node -``` - -### Starting the Data Producer - -The producer simulates an online learning scenario by writing a local Criteo dataset to Kafka for use by the consumer. The current sample uses multiple processes to read two files and write their data to Kafka. - -```bash -python producer.py --file1=$CRITEO_DATASET_FILE_PATH --file2=$CRITEO_DATASET_FILE_PATH -# Parameter description -# --file1: Local disk path of the Criteo dataset -# --file2: Local disk path of the Criteo dataset -# Both files are original Criteo dataset text files. file1 and file2 are processed concurrently and may be the same or different; if they are the same, each sample in the file is effectively used twice. -``` - -### Starting the Data Consumer - -```bash -python consumer.py --num_shards=$DEVICE_NUM --address=$LOCAL_HOST_IP --dataset_name=$DATASET_NAME - --max_dict=$PATH_TO_VAL_MAX_DICT --min_dict=$PATH_TO_VAL_MIN_DICT --map_dict=$PATH_TO_CAT_TO_ID_DICT - -# Parameter description -# --num_shards: Number of device cards on the training side; set to 1 for single-card training and 8 for 8-card training. -# --address: Address of the current sender -# --dataset_name: Dataset name -# --namespace: Channel name -# --max_dict: Dictionary of maximum values for the dense feature columns -# --min_dict: Dictionary of minimum values for the dense feature columns -# --map_dict: Dictionary for the sparse feature columns -``` - -The consumer needs three dataset-related files for feature engineering of the Criteo dataset: `all_val_max_dict.pkl`, `all_val_min_dict.pkl` and `cat2id_dict.pkl`.
`$PATH_TO_VAL_MAX_DICT`, `$PATH_TO_VAL_MIN_DICT` and `$PATH_TO_CAT_TO_ID_DICT` are the absolute paths of these files in the environment, respectively. For how these three .pkl files are produced by converting the original Criteo dataset, see [process_data.py](https://github.com/mindspore-lab/mindrec/blob/master/datasets/criteo_1tb/process_data.py). - -### Starting Online Training - -For the YAML file used by config, please refer to [default_config.yaml](https://github.com/mindspore-lab/mindrec/blob/master/examples/online_learning/default_config.yaml). - -Single-card training: - -```bash -python online_train.py --address=$LOCAL_HOST_IP --dataset_name=criteo - -# Parameter description: -# --address: Local host IP, required for receiving training data from MindSpore Pandas -# --dataset_name: Dataset name, consistent with the consumer module -``` - -Starting multi-card training in MPI mode: - -```bash -bash mpirun_dist_online_train.sh [$RANK_SIZE] [$LOCAL_HOST_IP] - -# Parameter description: -# RANK_SIZE: Number of cards for multi-card training -# LOCAL_HOST_IP: Local host IP for MindSpore Pandas to receive training data -``` - -Starting multi-card training via dynamic networking: - -```bash -bash run_dist_online_train.sh [$WORKER_NUM] [$SHED_HOST] [$SCHED_PORT] [$LOCAL_HOST_IP] - -# Parameter description: -# WORKER_NUM: Number of cards for multi-card training -# SHED_HOST: IP of the Scheduler role required for MindSpore dynamic networking -# SCHED_PORT: Port of the Scheduler role required for MindSpore dynamic networking -# LOCAL_HOST_IP: Local host IP, required for receiving training data from MindSpore Pandas -``` - -When training starts successfully, the following log is output: - -epoch and step indicate the epoch and step numbers of the current training step, and wide_loss and deep_loss are the training loss values of the Wide&Deep network.
- -```text -epoch: 1, step: 1, wide_loss: 0.66100323, deep_loss: 0.72502613 -epoch: 1, step: 2, wide_loss: 0.46781272, deep_loss: 0.5293098 -epoch: 1, step: 3, wide_loss: 0.363207, deep_loss: 0.42204413 -epoch: 1, step: 4, wide_loss: 0.3051032, deep_loss: 0.36126155 -epoch: 1, step: 5, wide_loss: 0.24045062, deep_loss: 0.29395688 -epoch: 1, step: 6, wide_loss: 0.24296054, deep_loss: 0.29386574 -epoch: 1, step: 7, wide_loss: 0.20943595, deep_loss: 0.25780612 -epoch: 1, step: 8, wide_loss: 0.19562452, deep_loss: 0.24153553 -epoch: 1, step: 9, wide_loss: 0.16500896, deep_loss: 0.20854339 -epoch: 1, step: 10, wide_loss: 0.2188702, deep_loss: 0.26011512 -epoch: 1, step: 11, wide_loss: 0.14963374, deep_loss: 0.18867904 -``` diff --git a/docs/recommender/docs/source_en/recommender.rst b/docs/recommender/docs/source_en/recommender.rst deleted file mode 100644 index 8e71acbfd8df9828f562a44f08b5c4906fd936c3..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/source_en/recommender.rst +++ /dev/null @@ -1,9 +0,0 @@ -mindspore_rec -=============== - -.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg - :target: https://gitee.com/mindspore/docs/blob/master/docs/recommender/docs/source_en/recommender.rst - :alt: View Source On Gitee - -.. autoclass:: mindspore_rec.RecModel - :members: \ No newline at end of file diff --git a/docs/recommender/docs/source_zh_cn/conf.py b/docs/recommender/docs/source_zh_cn/conf.py deleted file mode 100644 index fa050a14c578122da2feda353ef0a717c99bbd62..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/source_zh_cn/conf.py +++ /dev/null @@ -1,237 +0,0 @@ -# Configuration file for the Sphinx documentation builder. -# -# This file only contains a selection of the most common options. 
For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import shutil -import sys -import IPython -import re -sys.path.append(os.path.abspath('../_ext')) -from sphinx.ext import autodoc as sphinx_autodoc - - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Recommender' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'myst_parser', - 'sphinx.ext.mathjax', - 'IPython.sphinxext.ipython_console_highlighting' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. 
-mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -pygments_style = 'sphinx' - -# -- Options for HTML output ------------------------------------------------- - -# Reconstruction of sphinx auto generated document translation. -language = 'zh_CN' -locale_dirs = ['../../../../resource/locale/'] -gettext_compact = False - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = 'sphinx_rtd_theme' - -html_search_language = 'zh' - -html_search_options = {'dict': '../../../resource/jieba.txt'} - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - 'python': ('https://docs.python.org/3', '../../../../resource/python_objects.inv'), -} - -from sphinx import directives -with open('../_ext/overwriteobjectiondirective.txt', 'r', encoding="utf8") as f: - exec(f.read(), directives.__dict__) - -from sphinx.ext import viewcode -with open('../_ext/overwriteviewcode.txt', 'r', encoding="utf8") as f: - exec(f.read(), viewcode.__dict__) - -# Modify default signatures for autodoc. 
-autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__) -autodoc_source_re = re.compile(r'stringify_signature\(.*?\)') -get_param_func_str = r"""\ -import re -import inspect as inspect_ - -def get_param_func(func): - try: - source_code = inspect_.getsource(func) - if func.__doc__: - source_code = source_code.replace(func.__doc__, '') - all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code) - all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\"")) - return all_params - except: - return '' - -def get_obj(obj): - if isinstance(obj, type): - return obj.__init__ - - return obj -""" - -with open(autodoc_source_path, "r+", encoding="utf8") as f: - code_str = f.read() - code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0) - exec(get_param_func_str, sphinx_autodoc.__dict__) - exec(code_str, sphinx_autodoc.__dict__) - -# Copy source files of chinese python api from mindpandas repository. 
-from sphinx.util import logging -logger = logging.getLogger(__name__) - -copy_path = 'docs/api/api_python' -src_dir = os.path.join(os.getenv("RD_PATH"), copy_path) - -copy_list = [] - -present_path = os.path.dirname(__file__) - -for i in os.listdir(src_dir): - if os.path.isfile(os.path.join(src_dir,i)): - if os.path.exists('./'+i): - os.remove('./'+i) - shutil.copy(os.path.join(src_dir,i),'./'+i) - copy_list.append(os.path.join(present_path,i)) - else: - if os.path.exists('./'+i): - shutil.rmtree('./'+i) - shutil.copytree(os.path.join(src_dir,i),'./'+i) - copy_list.append(os.path.join(present_path,i)) - -# add view -import json - -if os.path.exists('../../../../tools/generate_html/version.json'): - with open('../../../../tools/generate_html/version.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily_dev.json'): - with open('../../../../tools/generate_html/daily_dev.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily.json'): - with open('../../../../tools/generate_html/daily.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) - -if os.getenv("RD_PATH").split('/')[-1]: - copy_repo = os.getenv("RD_PATH").split('/')[-1] -else: - copy_repo = os.getenv("RD_PATH").split('/')[-2] - -branch = [version_inf[i]['branch'] for i in range(len(version_inf)) - if version_inf[i]['name'] == copy_repo.replace('mindrec', 'recommender')][0] -docs_branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == 'tutorials'][0] - -re_view = f"\n.. 
image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/{docs_branch}/" + \ - f"resource/_static/logo_github_source.svg\n :target: https://github.com/mindspore-lab/{copy_repo}/blob/{branch}/" - -for cur, _, files in os.walk(present_path): - for i in files: - flag_copy = 0 - if i.endswith('.rst'): - for j in copy_list: - if j in cur: - flag_copy = 1 - break - if os.path.join(cur, i) in copy_list or flag_copy: - try: - with open(os.path.join(cur, i), 'r+', encoding='utf-8') as f: - content = f.read() - new_content = content - if '.. include::' in content and '.. automodule::' in content: - continue - if 'autosummary::' not in content and "\n=====" in content: - re_view_ = re_view + copy_path + cur.split(present_path)[-1] + '/' + i + \ - '\n :alt: 查看源文件\n\n' - new_content = re.sub('([=]{5,})\n', r'\1\n' + re_view_, content, 1) - if new_content != content: - f.seek(0) - f.truncate() - f.write(new_content) - except Exception: - print(f'打开{i}文件失败') - -import mindspore_rec - -sys.path.append(os.path.abspath('../../../../resource/sphinx_ext')) -# import anchor_mod -import nbsphinx_mod - -sys.path.append(os.path.abspath('../../../../resource/search')) -import search_code - -sys.path.append(os.path.abspath('../../../../resource/custom_directives')) -from custom_directives import IncludeCodeDirective - -def setup(app): - app.add_directive('includecode', IncludeCodeDirective) - -try: - src_release = os.path.join(os.getenv("RD_PATH"), 'RELEASE_CN.md') - des_release = "./RELEASE.md" - with open(src_release, "r", encoding="utf-8") as f: - data = f.read() - if len(re.findall("\n## (.*?)\n",data)) > 1: - content = re.findall("(## [\s\S\n]*?)\n## ", data) - else: - content = re.findall("(## [\s\S\n]*)", data) - # result = content[0].replace('# MindSpore', '#', 1) - with open(des_release, "w", encoding="utf-8") as p: - p.write("# Release Notes"+"\n\n") - p.write(content[0]) -except Exception as e: - print('release文件拷贝失败,原因是:',e) \ No newline at end of file 
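The release-notes handling at the end of the conf.py above hinges on two regex branches that are easy to misread. The standalone sketch below applies equivalent patterns (written as raw strings; the `data` string is made up for illustration) to show which section of a RELEASE_CN.md-style file survives:

```python
import re

# Made-up RELEASE_CN.md-style content with two "## " version sections.
data = """# MindSpore Recommender

## 0.3.0
- feature A

## 0.2.0
- feature B
"""

# Same branching as the conf.py snippet: when more than one "## " heading
# exists, keep only the first section; otherwise keep everything from the
# first "## " heading onward.
if len(re.findall(r"\n## (.*?)\n", data)) > 1:
    content = re.findall(r"(## [\s\S]*?)\n## ", data)
else:
    content = re.findall(r"(## [\s\S]*)", data)

release_notes = "# Release Notes\n\n" + content[0]
print(release_notes)
```

With more than one `## ` heading in the input, only the first section (here, 0.3.0) is kept, matching the intent of extracting the latest release's notes.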
diff --git a/docs/recommender/docs/source_zh_cn/images/architecture.png b/docs/recommender/docs/source_zh_cn/images/architecture.png deleted file mode 100644 index ea84b417d855562fd4696482e6dd91cc88338712..0000000000000000000000000000000000000000 Binary files a/docs/recommender/docs/source_zh_cn/images/architecture.png and /dev/null differ diff --git a/docs/recommender/docs/source_zh_cn/images/offline_training.png b/docs/recommender/docs/source_zh_cn/images/offline_training.png deleted file mode 100644 index 41eac8a2105a981227866823e209cd04e8ccb391..0000000000000000000000000000000000000000 Binary files a/docs/recommender/docs/source_zh_cn/images/offline_training.png and /dev/null differ diff --git a/docs/recommender/docs/source_zh_cn/images/online_training.png b/docs/recommender/docs/source_zh_cn/images/online_training.png deleted file mode 100644 index 230b248e36abb4db7acf1a7d42524ccf1a03a0da..0000000000000000000000000000000000000000 Binary files a/docs/recommender/docs/source_zh_cn/images/online_training.png and /dev/null differ diff --git a/docs/recommender/docs/source_zh_cn/index.rst b/docs/recommender/docs/source_zh_cn/index.rst deleted file mode 100644 index 8f4fc937b750ae8da70d82f5fe88029d79bc6f60..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/source_zh_cn/index.rst +++ /dev/null @@ -1,45 +0,0 @@ -MindSpore Recommender 文档 -============================= - -MindSpore Recommender是一个构建在MindSpore框架基础上,面向推荐领域的开源训练加速库,通过MindSpore大规模的异构计算加速能力,MindSpore Recommender支持在线以及离线场景大规模动态特征的高效训练。 - -.. raw:: html - -

    - -MindSpore Recommender加速库由如下部分组成: - -- 在线训练:通过流式读取实时数据源中的数据 (例如:Kafka),以及在线的实时数据加工,实现实时数据的在线训练以及增量模型更新,从而支持对于模型有实时更新需要的业务场景; -- 离线训练:面向传统的离线数据集训练场景,通过自动并行、分布式特征缓存、异构加速等技术方案,支持包含大规模特征向量的推荐模型训练; -- 数据处理:MindSpore Pandas和MindData提供了在离线数据的读取和处理能力,通过全Python的表达支持,节省了多语言和多框架开销,同时打通了数据处理和模型训练的高效数据流转链路; -- 模型库:包含持续丰富的典型推荐模型训练,经过严格的精度和性能验证,支持开箱即用。 - -代码仓地址: - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: 安装部署 - - install - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: 使用指南 - - offline_learning - online_learning - -.. toctree:: - :maxdepth: 1 - :caption: API参考 - - recommender - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: RELEASE NOTES - - RELEASE diff --git a/docs/recommender/docs/source_zh_cn/install.md b/docs/recommender/docs/source_zh_cn/install.md deleted file mode 100644 index 09c6c3261ee2f12b340e29686cf8c188420fafda..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/source_zh_cn/install.md +++ /dev/null @@ -1,36 +0,0 @@ -# 安装MindSpore Recommender - -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/recommender/docs/source_zh_cn/install.md) - -MindSpore Recommender依赖MindSpore训练框架,安装完[MindSpore](https://gitee.com/mindspore/mindspore#安装),再安装MindSpore Recommender。可以采用pip安装或者源码编译安装两种方式。 - -## pip安装 - -使用pip命令安装,请从[MindSpore Recommender下载页面](https://www.mindspore.cn/versions)下载并安装whl包。 - -```shell -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{ms_version}/Recommender/any/mindspore_rec-{mr_version}-py3-none-any.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -- 在联网状态下,安装whl包时会自动下载MindSpore Recommender安装包的依赖项(依赖项详情参见requirement.txt),其余情况需自行安装。 -- `{ms_version}`表示与MindSpore Recommender匹配的MindSpore版本号。 -- `{mr_version}`表示MindSpore Recommender版本号,例如下载0.2.0版本MindSpore Recommender时,`{mr_version}`应写为0.2.0。 - -## 源码编译安装 - 
-下载[源码](https://github.com/mindspore-lab/mindrec),下载后进入`mindrec`目录。 - -```shell -bash build.sh -pip install output/mindspore_rec-0.2.0-py3-none-any.whl -``` - -其中,`build.sh`为`recommender`目录下的编译脚本文件。 - -## 验证安装是否成功 - -执行以下命令,验证安装结果。导入Python模块不报错即安装成功: - -```python -import mindspore_rec -``` diff --git a/docs/recommender/docs/source_zh_cn/offline_learning.md b/docs/recommender/docs/source_zh_cn/offline_learning.md deleted file mode 100644 index e63e4a6e1372586492e4afec85d28ffa021cf2e0..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/source_zh_cn/offline_learning.md +++ /dev/null @@ -1,17 +0,0 @@ -# 离线训练 - -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/recommender/docs/source_zh_cn/offline_learning.md) - -## 概述 - -推荐模型训练的主要挑战之一是对于大规模特征向量的存储与训练,MindSpore Recommender为离线场景的大规模特征向量训练提供了完善的解决方案。 - -## 整体架构 - -针对推荐模型中大规模特征向量的训练架构如下图所示,其中核心采用了分布式多级Embedding Cache的技术方案,同时基于模型并行的多机多卡分布式并行技术,实现了大规模低成本的推荐大模型训练。 - -![image.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/recommender/docs/source_zh_cn/images/offline_training.png) - -## 使用样例 - -[Wide&Deep 分布式训练](https://github.com/mindspore-lab/mindrec/tree/master/models/wide_deep) diff --git a/docs/recommender/docs/source_zh_cn/online_learning.md b/docs/recommender/docs/source_zh_cn/online_learning.md deleted file mode 100644 index 234279665ec84186aa5e16f9307ced3ac010448e..0000000000000000000000000000000000000000 --- a/docs/recommender/docs/source_zh_cn/online_learning.md +++ /dev/null @@ -1,213 +0,0 @@ -# 在线学习 - -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/recommender/docs/source_zh_cn/online_learning.md) - -## 概述 - -推荐网络模型更新的实时性是重要的技术指标之一,在线学习可有效提升推荐网络模型更新的实时性。 - -在线学习与离线训练的主要区别: - -1. 
在线学习的数据集为流式数据,无确定的dataset size、epoch,离线训练的数据集有确定的dataset size、epoch。 -2. 在线学习为常驻服务形式,离线训练结束后任务退出。 -3. 在线学习需要收集并存储训练数据,收集到固定数量的数据或经过固定的时间窗口后驱动训练流程。 - -## 整体架构 - -用户的流式训练数据推送到Kafka中,MindSpore Pandas从Kafka读取数据并进行特征工程转换,然后写入特征存储引擎中,MindData从存储引擎中读取数据作为训练数据进行训练,MindSpore作为服务常驻,持续接收数据并执行训练,整体流程如下图所示: - -![image.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/recommender/docs/source_zh_cn/images/online_training.png) - -## 使用约束 - -- 需要安装Python3.8及以上版本。 -- 目前仅支持GPU训练、Linux操作系统。 - -## Python包依赖 - -mindpandas v0.1.0 - -mindspore_rec v0.2.0 - -kafka-python v2.0.2 - -## 使用样例 - -下面以Criteo数据集训练Wide&Deep为例,介绍一下在线学习的流程,样例代码位于[在线学习](https://github.com/mindspore-lab/mindrec/tree/master/examples/online_learning)。 - -MindSpore Recommender为在线学习提供了专门的算法模型`RecModel`,搭配实时数据源Kafka数据读取与特征处理的MindSpore Pandas即可实现一个简单的在线学习流程。 -首先自定义一个实时数据处理的数据集,其中的构造函数参数`receiver`是MindPands中的`DataReceiver`类型,用于接收实时数据,`__getitem__`表示一次读取一条数据。 - -```python -class StreamingDataset: - def __init__(self, receiver): - self.data_ = [] - self.receiver_ = receiver - - def __getitem__(self, item): - while not self.data_: - data = self.receiver_.recv() - if data is not None: - self.data_ = data.tolist() - - last_row = self.data_.pop() - return np.array(last_row[0], dtype=np.int32), np.array(last_row[1], dtype=np.float32), np.array(last_row[2], dtype=np.float32) -``` - -接着将上述自定义数据集封装成`RecModel`所需要的在线数据集。 - -```python -from mindpandas.channel import DataReceiver -from mindspore_rec import RecModel as Model - -receiver = DataReceiver(address=config.address, namespace=config.namespace, - dataset_name=config.dataset_name, shard_id=0) -stream_dataset = StreamingDataset(receiver) - -dataset = ds.GeneratorDataset(stream_dataset, column_names=["id", "weight", "label"]) -dataset = dataset.batch(config.batch_size) - -train_net, _ = GetWideDeepNet(config) -train_net.set_train() - -model = Model(train_net) -``` - -在配置好模型Checkpoint的导出策略后,启动在线训练进程。 - -```python -ckptconfig = 
CheckpointConfig(save_checkpoint_steps=100, keep_checkpoint_max=5) -ckpoint_cb = ModelCheckpoint(prefix='widedeep_train', directory="./ckpt", config=ckptconfig) - -model.online_train(dataset, callbacks=[TimeMonitor(1), callback, ckpoint_cb], dataset_sink_mode=True) -``` - -下面介绍在线学习流程中涉及各个模块的启动流程: - -### 下载Kafka - -```bash -wget https://archive.apache.org/dist/kafka/3.2.0/kafka_2.13-3.2.0.tgz -tar -xzf kafka_2.13-3.2.0.tgz -cd kafka_2.13-3.2.0 -``` - -如需安装其他版本,请参照。 - -### 启动kafka-zookeeper - -```bash -bin/zookeeper-server-start.sh config/zookeeper.properties -``` - -### 启动kafka-server - -打开另一个命令终端,启动kafka服务。 - -```bash -bin/kafka-server-start.sh config/server.properties -``` - -### 启动kafka_client - -进入recommender仓在线学习样例目录,启动kafka_client。kafka_client只需要启动一次,可以使用kafka设置topic对应的partition数量。 - -```bash -cd recommender/examples/online_learning -python kafka_client.py -``` - -### 启动分布式计算引擎 - -```bash -yrctl start --master --address $MASTER_HOST_IP - -# 参数说明 -# --master: 表示当前host为master节点,非master节点不用指定‘--master’参数 -# --address: master节点的ip -``` - -### 启动数据producer - -producer用于模拟在线学习场景,将本地的criteo数据集写入到Kafka,供consumer使用。当前样例使用多进程读取两个文件,并将数据写入Kafka。 - -```bash -python producer.py --file1=$CRITEO_DATASET_FILE_PATH --file2=$CRITEO_DATASET_FILE_PATH - -# 参数说明 -# --file1: criteo数据集在本地磁盘的存放路径 -# --file2: criteo数据集在本地磁盘的存放路径 -# 上述文件均为criteo原始数据集文本文件,file1和file2可以被并发处理,file1和file2可以相同也可以不同,如果相同则相当于文件中每个样本被使用两次。 -``` - -### 启动数据consumer - -```bash -python consumer.py --num_shards=$DEVICE_NUM --address=$LOCAL_HOST_IP --dataset_name=$DATASET_NAME - --max_dict=$PATH_TO_VAL_MAX_DICT --min_dict=$PATH_TO_VAL_MIN_DICT --map_dict=$PATH_TO_CAT_TO_ID_DICT - -# 参数说明 -# --num_shards: 对应训练侧的device 卡数,单卡训练则设置为1,8卡训练设置为8 -# --address: 当前sender的地址 -# --dataset_name: 数据集名称 -# --namespace: channel名称 -# --max_dict: 稠密特征列的最大值字典 -# --min_dict: 稠密特征列的最小值字典 -# --map_dict: 稀疏特征列的字典 -``` - 
-consumer为criteo数据集进行特征工程需要3个数据集相关文件:`all_val_max_dict.pkl`、`all_val_min_dict.pkl`和`cat2id_dict.pkl`。`$PATH_TO_VAL_MAX_DICT`、`$PATH_TO_VAL_MIN_DICT`和`$PATH_TO_CAT_TO_ID_DICT` 分别为这些文件在环境上的绝对路径。这3个pkl文件具体生产方法可以参考[process_data.py](https://github.com/mindspore-lab/mindrec/blob/master/datasets/criteo_1tb/process_data.py),对原始criteo数据集做转换生成对应的.pkl文件。 - -### 启动在线训练 - -config采用yaml的形式,见[default_config.yaml](https://github.com/mindspore-lab/mindrec/blob/master/examples/online_learning/default_config.yaml)。 - -单卡训练: - -```bash -python online_train.py --address=$LOCAL_HOST_IP --dataset_name=criteo - -# 参数说明: -# --address: 本机host ip,从MindSpore Pandas接收训练数据需要配置 -# --dataset_name: 数据集名字,和consumer模块保持一致 -``` - -多卡训练MPI方式启动: - -```bash -bash mpirun_dist_online_train.sh [$RANK_SIZE] [$LOCAL_HOST_IP] - -# 参数说明: -# RANK_SIZE:多卡训练卡数量 -# LOCAL_HOST_IP:本机host ip,用于MindSpore Pandas接收训练数据 -``` - -动态组网方式启动多卡训练: - -```bash -bash run_dist_online_train.sh [$WORKER_NUM] [$SHED_HOST] [$SCHED_PORT] [$LOCAL_HOST_IP] - -# 参数说明: -# WORKER_NUM:多卡训练卡数量 -# SHED_HOST:MindSpore动态组网需要的Scheduler 角色的IP -# SCHED_PORT:MindSpore动态组网需要的Scheduler 角色的Port -# LOCAL_HOST_IP:本机host ip,从MindSpore Pandas接收训练数据需要配置 -``` - -成功启动训练后,会输出如下日志: - -其中epoch和step表示当前训练步骤对应的epoch和step数,wide_loss和deep_loss表示wide&deep网络中的训练loss值。 - -```text -epoch: 1, step: 1, wide_loss: 0.66100323, deep_loss: 0.72502613 -epoch: 1, step: 2, wide_loss: 0.46781272, deep_loss: 0.5293098 -epoch: 1, step: 3, wide_loss: 0.363207, deep_loss: 0.42204413 -epoch: 1, step: 4, wide_loss: 0.3051032, deep_loss: 0.36126155 -epoch: 1, step: 5, wide_loss: 0.24045062, deep_loss: 0.29395688 -epoch: 1, step: 6, wide_loss: 0.24296054, deep_loss: 0.29386574 -epoch: 1, step: 7, wide_loss: 0.20943595, deep_loss: 0.25780612 -epoch: 1, step: 8, wide_loss: 0.19562452, deep_loss: 0.24153553 -epoch: 1, step: 9, wide_loss: 0.16500896, deep_loss: 0.20854339 -epoch: 1, step: 10, wide_loss: 0.2188702, deep_loss: 0.26011512 -epoch: 1, step: 11, wide_loss: 0.14963374, deep_loss: 
0.18867904 -``` diff --git a/docs/reinforcement/docs/Makefile b/docs/reinforcement/docs/Makefile deleted file mode 100644 index 1eff8952707bdfa503c8d60c1e9a903053170ba2..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line, and also -# from the environment for the first two. -SPHINXOPTS ?= -SPHINXBUILD ?= sphinx-build -SOURCEDIR = source_zh_cn -BUILDDIR = build_zh_cn - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/docs/reinforcement/docs/_ext/customdocumenter.txt b/docs/reinforcement/docs/_ext/customdocumenter.txt deleted file mode 100644 index 2d37ae41f6772a21da2a7dc5c7bff75128e68330..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/_ext/customdocumenter.txt +++ /dev/null @@ -1,245 +0,0 @@ -import re -import os -from sphinx.ext.autodoc import Documenter - - -class CustomDocumenter(Documenter): - - def document_members(self, all_members: bool = False) -> None: - """Generate reST for member documentation. - - If *all_members* is True, do all members, else those given by - *self.options.members*. 
- """ - # set current namespace for finding members - self.env.temp_data['autodoc:module'] = self.modname - if self.objpath: - self.env.temp_data['autodoc:class'] = self.objpath[0] - - want_all = all_members or self.options.inherited_members or \ - self.options.members is ALL - # find out which members are documentable - members_check_module, members = self.get_object_members(want_all) - - # **** 排除已写中文接口名 **** - file_path = os.path.join(self.env.app.srcdir, self.env.docname+'.rst') - exclude_re = re.compile(r'(.. py:class::|.. py:function::)\s+(.*?)(\(|\n)') - includerst_re = re.compile(r'.. include::\s+(.*?)\n') - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - excluded_members = exclude_re.findall(content) - if excluded_members: - excluded_members = [i[1].split('.')[-1] for i in excluded_members] - rst_included = includerst_re.findall(content) - if rst_included: - for i in rst_included: - include_path = os.path.join(os.path.dirname(file_path), i) - if os.path.exists(include_path): - with open(include_path, 'r', encoding='utf8') as g: - content_ = g.read() - excluded_member_ = exclude_re.findall(content_) - if excluded_member_: - excluded_member_ = [j[1].split('.')[-1] for j in excluded_member_] - excluded_members.extend(excluded_member_) - - if excluded_members: - if self.options.exclude_members: - self.options.exclude_members |= set(excluded_members) - else: - self.options.exclude_members = excluded_members - - # remove members given by exclude-members - if self.options.exclude_members: - members = [ - (membername, member) for (membername, member) in members - if ( - self.options.exclude_members is ALL or - membername not in self.options.exclude_members - ) - ] - - # document non-skipped members - memberdocumenters = [] # type: List[Tuple[Documenter, bool]] - for (mname, member, isattr) in self.filter_members(members, want_all): - classes = [cls for cls in self.documenters.values() - if cls.can_document_member(member, mname, isattr, 
self)] - if not classes: - # don't know how to document this member - continue - # prefer the documenter with the highest priority - classes.sort(key=lambda cls: cls.priority) - # give explicitly separated module name, so that members - # of inner classes can be documented - full_mname = self.modname + '::' + \ - '.'.join(self.objpath + [mname]) - documenter = classes[-1](self.directive, full_mname, self.indent) - memberdocumenters.append((documenter, isattr)) - member_order = self.options.member_order or \ - self.env.config.autodoc_member_order - if member_order == 'groupwise': - # sort by group; relies on stable sort to keep items in the - # same group sorted alphabetically - memberdocumenters.sort(key=lambda e: e[0].member_order) - elif member_order == 'bysource' and self.analyzer: - # sort by source order, by virtue of the module analyzer - tagorder = self.analyzer.tagorder - - def keyfunc(entry: Tuple[Documenter, bool]) -> int: - fullname = entry[0].name.split('::')[1] - return tagorder.get(fullname, len(tagorder)) - memberdocumenters.sort(key=keyfunc) - - for documenter, isattr in memberdocumenters: - documenter.generate( - all_members=True, real_modname=self.real_modname, - check_module=members_check_module and not isattr) - - # reset current objects - self.env.temp_data['autodoc:module'] = None - self.env.temp_data['autodoc:class'] = None - - def generate(self, more_content: Any = None, real_modname: str = None, - check_module: bool = False, all_members: bool = False) -> None: - """Generate reST for the object given by *self.name*, and possibly for - its members. - - If *more_content* is given, include that content. If *real_modname* is - given, use that module name to find attribute docs. If *check_module* is - True, only generate if the object is defined in the module name it is - imported from. If *all_members* is True, document all members. 
- """ - if not self.parse_name(): - # need a module to import - logger.warning( - __('don\'t know which module to import for autodocumenting ' - '%r (try placing a "module" or "currentmodule" directive ' - 'in the document, or giving an explicit module name)') % - self.name, type='autodoc') - return - - # now, import the module and get object to document - if not self.import_object(): - return - - # If there is no real module defined, figure out which to use. - # The real module is used in the module analyzer to look up the module - # where the attribute documentation would actually be found in. - # This is used for situations where you have a module that collects the - # functions and classes of internal submodules. - self.real_modname = real_modname or self.get_real_modname() # type: str - - # try to also get a source code analyzer for attribute docs - try: - self.analyzer = ModuleAnalyzer.for_module(self.real_modname) - # parse right now, to get PycodeErrors on parsing (results will - # be cached anyway) - self.analyzer.find_attr_docs() - except PycodeError as err: - logger.debug('[autodoc] module analyzer failed: %s', err) - # no source file -- e.g. for builtin and C modules - self.analyzer = None - # at least add the module.__file__ as a dependency - if hasattr(self.module, '__file__') and self.module.__file__: - self.directive.filename_set.add(self.module.__file__) - else: - self.directive.filename_set.add(self.analyzer.srcname) - - # check __module__ of object (for members not given explicitly) - if check_module: - if not self.check_module(): - return - - # document members, if possible - self.document_members(all_members) - - -class ModuleDocumenter(CustomDocumenter): - """ - Specialized Documenter subclass for modules. 
- """ - objtype = 'module' - content_indent = '' - titles_allowed = True - - option_spec = { - 'members': members_option, 'undoc-members': bool_option, - 'noindex': bool_option, 'inherited-members': bool_option, - 'show-inheritance': bool_option, 'synopsis': identity, - 'platform': identity, 'deprecated': bool_option, - 'member-order': identity, 'exclude-members': members_set_option, - 'private-members': bool_option, 'special-members': members_option, - 'imported-members': bool_option, 'ignore-module-all': bool_option - } # type: Dict[str, Callable] - - def __init__(self, *args: Any) -> None: - super().__init__(*args) - merge_members_option(self.options) - - @classmethod - def can_document_member(cls, member: Any, membername: str, isattr: bool, parent: Any - ) -> bool: - # don't document submodules automatically - return False - - def resolve_name(self, modname: str, parents: Any, path: str, base: Any - ) -> Tuple[str, List[str]]: - if modname is not None: - logger.warning(__('"::" in automodule name doesn\'t make sense'), - type='autodoc') - return (path or '') + base, [] - - def parse_name(self) -> bool: - ret = super().parse_name() - if self.args or self.retann: - logger.warning(__('signature arguments or return annotation ' - 'given for automodule %s') % self.fullname, - type='autodoc') - return ret - - def add_directive_header(self, sig: str) -> None: - Documenter.add_directive_header(self, sig) - - sourcename = self.get_sourcename() - - # add some module-specific options - if self.options.synopsis: - self.add_line(' :synopsis: ' + self.options.synopsis, sourcename) - if self.options.platform: - self.add_line(' :platform: ' + self.options.platform, sourcename) - if self.options.deprecated: - self.add_line(' :deprecated:', sourcename) - - def get_object_members(self, want_all: bool) -> Tuple[bool, List[Tuple[str, object]]]: - if want_all: - if (self.options.ignore_module_all or not - hasattr(self.object, '__all__')): - # for implicit module members, check 
__module__ to avoid - # documenting imported objects - return True, get_module_members(self.object) - else: - memberlist = self.object.__all__ - # Sometimes __all__ is broken... - if not isinstance(memberlist, (list, tuple)) or not \ - all(isinstance(entry, str) for entry in memberlist): - logger.warning( - __('__all__ should be a list of strings, not %r ' - '(in module %s) -- ignoring __all__') % - (memberlist, self.fullname), - type='autodoc' - ) - # fall back to all members - return True, get_module_members(self.object) - else: - memberlist = self.options.members or [] - ret = [] - for mname in memberlist: - try: - ret.append((mname, safe_getattr(self.object, mname))) - except AttributeError: - logger.warning( - __('missing attribute mentioned in :members: or __all__: ' - 'module %s, attribute %s') % - (safe_getattr(self.object, '__name__', '???'), mname), - type='autodoc' - ) - return False, ret diff --git a/docs/reinforcement/docs/_ext/overwriteobjectiondirective.txt b/docs/reinforcement/docs/_ext/overwriteobjectiondirective.txt deleted file mode 100644 index e7ffdfe09a737771ead4a9c2ce1d0b945bb49947..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/_ext/overwriteobjectiondirective.txt +++ /dev/null @@ -1,374 +0,0 @@ -""" - sphinx.directives - ~~~~~~~~~~~~~~~~~ - - Handlers for additional ReST directives. - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import re -import inspect -import importlib -from functools import reduce -from typing import TYPE_CHECKING, Any, Dict, Generic, List, Tuple, TypeVar, cast - -from docutils import nodes -from docutils.nodes import Node -from docutils.parsers.rst import directives, roles - -from sphinx import addnodes -from sphinx.addnodes import desc_signature -from sphinx.deprecation import RemovedInSphinx50Warning, deprecated_alias -from sphinx.util import docutils, logging -from sphinx.util.docfields import DocFieldTransformer, Field, TypedField -from sphinx.util.docutils import SphinxDirective -from sphinx.util.typing import OptionSpec - -if TYPE_CHECKING: - from sphinx.application import Sphinx - - -# RE to strip backslash escapes -nl_escape_re = re.compile(r'\\\n') -strip_backslash_re = re.compile(r'\\(.)') - -T = TypeVar('T') -logger = logging.getLogger(__name__) - -def optional_int(argument: str) -> int: - """ - Check for an integer argument or None value; raise ``ValueError`` if not. - """ - if argument is None: - return None - else: - value = int(argument) - if value < 0: - raise ValueError('negative value; must be positive or zero') - return value - -def get_api(fullname): - """ - 获取接口对象。 - - :param fullname: 接口名全称 - :return: 属性对象或None(如果不存在) - """ - main_module = fullname.split('.')[0] - main_import = importlib.import_module(main_module) - - try: - return reduce(getattr, fullname.split('.')[1:], main_import) - except AttributeError: - return None - -def get_example(name: str): - try: - api_doc = inspect.getdoc(get_api(name)) - example_str = re.findall(r'Examples:\n([\w\W]*?)(\n\n|$)', api_doc) - if not example_str: - return [] - example_str = re.sub(r'\n\s+', r'\n', example_str[0][0]) - example_str = example_str.strip() - example_list = example_str.split('\n') - return ["", "**样例:**", ""] + example_list + [""] - except: - return [] - -def get_platforms(name: str): - try: - api_doc = inspect.getdoc(get_api(name)) - example_str = re.findall(r'Supported 
Platforms:\n\s+(.*?)\n\n', api_doc) - if not example_str: - example_str_leak = re.findall(r'Supported Platforms:\n\s+(.*)', api_doc) - if example_str_leak: - example_str = example_str_leak[0].strip() - example_list = example_str.split('\n') - example_list = [' ' + example_list[0]] - return ["", "支持平台:"] + example_list + [""] - return [] - example_str = example_str[0].strip() - example_list = example_str.split('\n') - example_list = [' ' + example_list[0]] - return ["", "支持平台:"] + example_list + [""] - except: - return [] - -class ObjectDescription(SphinxDirective, Generic[T]): - """ - Directive to describe a class, function or similar object. Not used - directly, but subclassed (in domain-specific directives) to add custom - behavior. - """ - - has_content = True - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = True - option_spec: OptionSpec = { - 'noindex': directives.flag, - } # type: Dict[str, DirectiveOption] - - # types of doc fields that this directive handles, see sphinx.util.docfields - doc_field_types: List[Field] = [] - domain: str = None - objtype: str = None - indexnode: addnodes.index = None - - # Warning: this might be removed in future version. Don't touch this from extensions. - _doc_field_type_map = {} # type: Dict[str, Tuple[Field, bool]] - - def get_field_type_map(self) -> Dict[str, Tuple[Field, bool]]: - if self._doc_field_type_map == {}: - self._doc_field_type_map = {} - for field in self.doc_field_types: - for name in field.names: - self._doc_field_type_map[name] = (field, False) - - if field.is_typed: - typed_field = cast(TypedField, field) - for name in typed_field.typenames: - self._doc_field_type_map[name] = (field, True) - - return self._doc_field_type_map - - def get_signatures(self) -> List[str]: - """ - Retrieve the signatures to document from the directive arguments. By - default, signatures are given as arguments, one per line. - - Backslash-escaping of newlines is supported. 
- """ - lines = nl_escape_re.sub('', self.arguments[0]).split('\n') - if self.config.strip_signature_backslash: - # remove backslashes to support (dummy) escapes; helps Vim highlighting - return [strip_backslash_re.sub(r'\1', line.strip()) for line in lines] - else: - return [line.strip() for line in lines] - - def handle_signature(self, sig: str, signode: desc_signature) -> Any: - """ - Parse the signature *sig* into individual nodes and append them to - *signode*. If ValueError is raised, parsing is aborted and the whole - *sig* is put into a single desc_name node. - - The return value should be a value that identifies the object. It is - passed to :meth:`add_target_and_index()` unchanged, and otherwise only - used to skip duplicates. - """ - raise ValueError - - def add_target_and_index(self, name: Any, sig: str, signode: desc_signature) -> None: - """ - Add cross-reference IDs and entries to self.indexnode, if applicable. - - *name* is whatever :meth:`handle_signature()` returned. - """ - return # do nothing by default - - def before_content(self) -> None: - """ - Called before parsing content. Used to set information about the current - directive context on the build environment. - """ - pass - - def transform_content(self, contentnode: addnodes.desc_content) -> None: - """ - Called after creating the content through nested parsing, - but before the ``object-description-transform`` event is emitted, - and before the info-fields are transformed. - Can be used to manipulate the content. - """ - pass - - def after_content(self) -> None: - """ - Called after parsing content. Used to reset information about the - current directive context on the build environment. - """ - pass - - def check_class_end(self, content): - for i in content: - if not i.startswith('.. 
include::') and i != "\n" and i != "": - return False - return True - - def extend_items(self, rst_file, start_num, num): - ls = [] - for i in range(1, num+1): - ls.append((rst_file, start_num+i)) - return ls - - def run(self) -> List[Node]: - """ - Main directive entry function, called by docutils upon encountering the - directive. - - This directive is meant to be quite easily subclassable, so it delegates - to several additional methods. What it does: - - * find out if called as a domain-specific directive, set self.domain - * create a `desc` node to fit all description inside - * parse standard options, currently `noindex` - * create an index node if needed as self.indexnode - * parse all given signatures (as returned by self.get_signatures()) - using self.handle_signature(), which should either return a name - or raise ValueError - * add index entries using self.add_target_and_index() - * parse the content and handle doc fields in it - """ - if ':' in self.name: - self.domain, self.objtype = self.name.split(':', 1) - else: - self.domain, self.objtype = '', self.name - self.indexnode = addnodes.index(entries=[]) - - node = addnodes.desc() - node.document = self.state.document - node['domain'] = self.domain - # 'desctype' is a backwards compatible attribute - node['objtype'] = node['desctype'] = self.objtype - node['noindex'] = noindex = ('noindex' in self.options) - if self.domain: - node['classes'].append(self.domain) - node['classes'].append(node['objtype']) - - self.names: List[T] = [] - signatures = self.get_signatures() - for sig in signatures: - # add a signature node for each signature in the current unit - # and add a reference target for it - signode = addnodes.desc_signature(sig, '') - self.set_source_info(signode) - node.append(signode) - try: - # name can also be a tuple, e.g. (classname, objname); - # this is strictly domain-specific (i.e. 
no assumptions may - # be made in this base class) - name = self.handle_signature(sig, signode) - except ValueError: - # signature parsing failed - signode.clear() - signode += addnodes.desc_name(sig, sig) - continue # we don't want an index entry here - if name not in self.names: - self.names.append(name) - if not noindex: - # only add target and index entry if this is the first - # description of the object with this name in this desc block - self.add_target_and_index(name, sig, signode) - - contentnode = addnodes.desc_content() - node.append(contentnode) - if self.names: - # needed for association of version{added,changed} directives - self.env.temp_data['object'] = self.names[0] - self.before_content() - try: - example = get_example(self.names[0][0]) - platforms = get_platforms(self.names[0][0]) - except Exception as e: - example = '' - platforms = '' - logger.warning(f'Error API names in {self.arguments[0]}.') - logger.warning(f'{e}') - extra = platforms + example - if extra: - if self.objtype == "method": - self.content.data.extend(extra) - else: - index_num = 0 - for num, i in enumerate(self.content.data): - if i.startswith('.. 
py:method::') or self.check_class_end(self.content.data[num:]): - index_num = num - break - if index_num: - count = len(self.content.data) - for i in extra: - self.content.data.insert(index_num-count, i) - else: - self.content.data.extend(extra) - try: - self.content.items.extend(self.extend_items(self.content.items[0][0], self.content.items[-1][1], len(extra))) - except Exception as e: - logger.warning(f'{e}') - self.state.nested_parse(self.content, self.content_offset, contentnode) - self.transform_content(contentnode) - self.env.app.emit('object-description-transform', - self.domain, self.objtype, contentnode) - DocFieldTransformer(self).transform_all(contentnode) - self.env.temp_data['object'] = None - self.after_content() - return [self.indexnode, node] - - -class DefaultRole(SphinxDirective): - """ - Set the default interpreted text role. Overridden from docutils. - """ - - optional_arguments = 1 - final_argument_whitespace = False - - def run(self) -> List[Node]: - if not self.arguments: - docutils.unregister_role('') - return [] - role_name = self.arguments[0] - role, messages = roles.role(role_name, self.state_machine.language, - self.lineno, self.state.reporter) - if role: - docutils.register_role('', role) - self.env.temp_data['default_role'] = role_name - else: - literal_block = nodes.literal_block(self.block_text, self.block_text) - reporter = self.state.reporter - error = reporter.error('Unknown interpreted text role "%s".' % role_name, - literal_block, line=self.lineno) - messages += [error] - - return cast(List[nodes.Node], messages) - - -class DefaultDomain(SphinxDirective): - """ - Directive to (re-)set the default domain for this source file. 
- """ - - has_content = False - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = False - option_spec = {} # type: Dict - - def run(self) -> List[Node]: - domain_name = self.arguments[0].lower() - # if domain_name not in env.domains: - # # try searching by label - # for domain in env.domains.values(): - # if domain.label.lower() == domain_name: - # domain_name = domain.name - # break - self.env.temp_data['default_domain'] = self.env.domains.get(domain_name) - return [] - -def setup(app: "Sphinx") -> Dict[str, Any]: - app.add_config_value("strip_signature_backslash", False, 'env') - directives.register_directive('default-role', DefaultRole) - directives.register_directive('default-domain', DefaultDomain) - directives.register_directive('describe', ObjectDescription) - # new, more consistent, name - directives.register_directive('object', ObjectDescription) - - app.add_event('object-description-transform') - - return { - 'version': 'builtin', - 'parallel_read_safe': True, - 'parallel_write_safe': True, - } - diff --git a/docs/reinforcement/docs/_ext/overwriteviewcode.txt b/docs/reinforcement/docs/_ext/overwriteviewcode.txt deleted file mode 100644 index 172780ec56b3ed90e7b0add617257a618cf38ee0..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/_ext/overwriteviewcode.txt +++ /dev/null @@ -1,378 +0,0 @@ -""" - sphinx.ext.viewcode - ~~~~~~~~~~~~~~~~~~~ - - Add links to module code in Python object descriptions. - - :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import posixpath -import traceback -import warnings -from os import path -from typing import Any, Dict, Generator, Iterable, Optional, Set, Tuple, cast - -from docutils import nodes -from docutils.nodes import Element, Node - -import sphinx -from sphinx import addnodes -from sphinx.application import Sphinx -from sphinx.builders import Builder -from sphinx.builders.html import StandaloneHTMLBuilder -from sphinx.deprecation import RemovedInSphinx50Warning -from sphinx.environment import BuildEnvironment -from sphinx.locale import _, __ -from sphinx.pycode import ModuleAnalyzer -from sphinx.transforms.post_transforms import SphinxPostTransform -from sphinx.util import get_full_modname, logging, status_iterator -from sphinx.util.nodes import make_refnode - - -logger = logging.getLogger(__name__) - - -OUTPUT_DIRNAME = '_modules' - - -class viewcode_anchor(Element): - """Node for viewcode anchors. - - This node will be processed in the resolving phase. - For viewcode supported builders, they will be all converted to the anchors. - For not supported builders, they will be removed. - """ - - -def _get_full_modname(app: Sphinx, modname: str, attribute: str) -> Optional[str]: - try: - return get_full_modname(modname, attribute) - except AttributeError: - # sphinx.ext.viewcode can't follow class instance attribute - # then AttributeError logging output only verbose mode. - logger.verbose('Didn\'t find %s in %s', attribute, modname) - return None - except Exception as e: - # sphinx.ext.viewcode follow python domain directives. - # because of that, if there are no real modules exists that specified - # by py:function or other directives, viewcode emits a lot of warnings. - # It should be displayed only verbose mode. 
- logger.verbose(traceback.format_exc().rstrip()) - logger.verbose('viewcode can\'t import %s, failed with error "%s"', modname, e) - return None - - -def is_supported_builder(builder: Builder) -> bool: - if builder.format != 'html': - return False - elif builder.name == 'singlehtml': - return False - elif builder.name.startswith('epub') and not builder.config.viewcode_enable_epub: - return False - else: - return True - - -def doctree_read(app: Sphinx, doctree: Node) -> None: - env = app.builder.env - if not hasattr(env, '_viewcode_modules'): - env._viewcode_modules = {} # type: ignore - - def has_tag(modname: str, fullname: str, docname: str, refname: str) -> bool: - entry = env._viewcode_modules.get(modname, None) # type: ignore - if entry is False: - return False - - code_tags = app.emit_firstresult('viewcode-find-source', modname) - if code_tags is None: - try: - analyzer = ModuleAnalyzer.for_module(modname) - analyzer.find_tags() - except Exception: - env._viewcode_modules[modname] = False # type: ignore - return False - - code = analyzer.code - tags = analyzer.tags - else: - code, tags = code_tags - - if entry is None or entry[0] != code: - entry = code, tags, {}, refname - env._viewcode_modules[modname] = entry # type: ignore - _, tags, used, _ = entry - if fullname in tags: - used[fullname] = docname - return True - - return False - - for objnode in list(doctree.findall(addnodes.desc)): - if objnode.get('domain') != 'py': - continue - names: Set[str] = set() - for signode in objnode: - if not isinstance(signode, addnodes.desc_signature): - continue - modname = signode.get('module') - fullname = signode.get('fullname') - try: - if fullname and modname==None: - if fullname.split('.')[-1].lower() == fullname.split('.')[-1] and fullname.split('.')[-2].lower() != fullname.split('.')[-2]: - modname = '.'.join(fullname.split('.')[:-2]) - fullname = '.'.join(fullname.split('.')[-2:]) - else: - modname = '.'.join(fullname.split('.')[:-1]) - fullname = 
fullname.split('.')[-1] - fullname_new = fullname - except Exception: - logger.warning(f'error_modename:{modname}') - logger.warning(f'error_fullname:{fullname}') - refname = modname - if env.config.viewcode_follow_imported_members: - new_modname = app.emit_firstresult( - 'viewcode-follow-imported', modname, fullname, - ) - if not new_modname: - new_modname = _get_full_modname(app, modname, fullname) - modname = new_modname - # logger.warning(f'new_modename:{modname}') - if not modname: - continue - # fullname = signode.get('fullname') - # if fullname and modname==None: - fullname = fullname_new - if not has_tag(modname, fullname, env.docname, refname): - continue - if fullname in names: - # only one link per name, please - continue - names.add(fullname) - pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/')) - signode += viewcode_anchor(reftarget=pagename, refid=fullname, refdoc=env.docname) - - -def env_merge_info(app: Sphinx, env: BuildEnvironment, docnames: Iterable[str], - other: BuildEnvironment) -> None: - if not hasattr(other, '_viewcode_modules'): - return - # create a _viewcode_modules dict on the main environment - if not hasattr(env, '_viewcode_modules'): - env._viewcode_modules = {} # type: ignore - # now merge in the information from the subprocess - for modname, entry in other._viewcode_modules.items(): # type: ignore - if modname not in env._viewcode_modules: # type: ignore - env._viewcode_modules[modname] = entry # type: ignore - else: - if env._viewcode_modules[modname]: # type: ignore - used = env._viewcode_modules[modname][2] # type: ignore - for fullname, docname in entry[2].items(): - if fullname not in used: - used[fullname] = docname - - -def env_purge_doc(app: Sphinx, env: BuildEnvironment, docname: str) -> None: - modules = getattr(env, '_viewcode_modules', {}) - - for modname, entry in list(modules.items()): - if entry is False: - continue - - code, tags, used, refname = entry - for fullname in list(used): - if 
used[fullname] == docname: - used.pop(fullname) - - if len(used) == 0: - modules.pop(modname) - - -class ViewcodeAnchorTransform(SphinxPostTransform): - """Convert or remove viewcode_anchor nodes depends on builder.""" - default_priority = 100 - - def run(self, **kwargs: Any) -> None: - if is_supported_builder(self.app.builder): - self.convert_viewcode_anchors() - else: - self.remove_viewcode_anchors() - - def convert_viewcode_anchors(self) -> None: - for node in self.document.findall(viewcode_anchor): - anchor = nodes.inline('', _('[源代码]'), classes=['viewcode-link']) - refnode = make_refnode(self.app.builder, node['refdoc'], node['reftarget'], - node['refid'], anchor) - node.replace_self(refnode) - - def remove_viewcode_anchors(self) -> None: - for node in list(self.document.findall(viewcode_anchor)): - node.parent.remove(node) - - -def missing_reference(app: Sphinx, env: BuildEnvironment, node: Element, contnode: Node - ) -> Optional[Node]: - # resolve our "viewcode" reference nodes -- they need special treatment - if node['reftype'] == 'viewcode': - warnings.warn('viewcode extension is no longer use pending_xref node. ' - 'Please update your extension.', RemovedInSphinx50Warning) - return make_refnode(app.builder, node['refdoc'], node['reftarget'], - node['refid'], contnode) - - return None - - -def get_module_filename(app: Sphinx, modname: str) -> Optional[str]: - """Get module filename for *modname*.""" - source_info = app.emit_firstresult('viewcode-find-source', modname) - if source_info: - return None - else: - try: - filename, source = ModuleAnalyzer.get_module_source(modname) - return filename - except Exception: - return None - - -def should_generate_module_page(app: Sphinx, modname: str) -> bool: - """Check generation of module page is needed.""" - module_filename = get_module_filename(app, modname) - if module_filename is None: - # Always (re-)generate module page when module filename is not found. 
- return True - - builder = cast(StandaloneHTMLBuilder, app.builder) - basename = modname.replace('.', '/') + builder.out_suffix - page_filename = path.join(app.outdir, '_modules/', basename) - - try: - if path.getmtime(module_filename) <= path.getmtime(page_filename): - # generation is not needed if the HTML page is newer than module file. - return False - except IOError: - pass - - return True - - -def collect_pages(app: Sphinx) -> Generator[Tuple[str, Dict[str, Any], str], None, None]: - env = app.builder.env - if not hasattr(env, '_viewcode_modules'): - return - if not is_supported_builder(app.builder): - return - highlighter = app.builder.highlighter # type: ignore - urito = app.builder.get_relative_uri - - modnames = set(env._viewcode_modules) # type: ignore - - for modname, entry in status_iterator( - sorted(env._viewcode_modules.items()), # type: ignore - __('highlighting module code... '), "blue", - len(env._viewcode_modules), # type: ignore - app.verbosity, lambda x: x[0]): - if not entry: - continue - if not should_generate_module_page(app, modname): - continue - - code, tags, used, refname = entry - # construct a page name for the highlighted source - pagename = posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/')) - # highlight the source using the builder's highlighter - if env.config.highlight_language in ('python3', 'default', 'none'): - lexer = env.config.highlight_language - else: - lexer = 'python' - highlighted = highlighter.highlight_block(code, lexer, linenos=False) - # split the code into lines - lines = highlighted.splitlines() - # split off wrap markup from the first line of the actual code - before, after = lines[0].split('
    ')
    -        lines[0:1] = [before + '
    ', after]
    -        # nothing to do for the last line; it always starts with 
    anyway - # now that we have code lines (starting at index 1), insert anchors for - # the collected tags (HACK: this only works if the tag boundaries are - # properly nested!) - maxindex = len(lines) - 1 - for name, docname in used.items(): - type, start, end = tags[name] - backlink = urito(pagename, docname) + '#' + refname + '.' + name - lines[start] = ( - '
    %s' % (name, backlink, _('[文档]')) + - lines[start]) - lines[min(end, maxindex)] += '
    ' - # try to find parents (for submodules) - parents = [] - parent = modname - while '.' in parent: - parent = parent.rsplit('.', 1)[0] - if parent in modnames: - parents.append({ - 'link': urito(pagename, - posixpath.join(OUTPUT_DIRNAME, parent.replace('.', '/'))), - 'title': parent}) - parents.append({'link': urito(pagename, posixpath.join(OUTPUT_DIRNAME, 'index')), - 'title': _('Module code')}) - parents.reverse() - # putting it all together - context = { - 'parents': parents, - 'title': modname, - 'body': (_('

    Source code for %s

    ') % modname + - '\n'.join(lines)), - } - yield (pagename, context, 'page.html') - - if not modnames: - return - - html = ['\n'] - # the stack logic is needed for using nested lists for submodules - stack = [''] - for modname in sorted(modnames): - if modname.startswith(stack[-1]): - stack.append(modname + '.') - html.append('
      ') - else: - stack.pop() - while not modname.startswith(stack[-1]): - stack.pop() - html.append('
    ') - stack.append(modname + '.') - html.append('
  • %s
  • \n' % ( - urito(posixpath.join(OUTPUT_DIRNAME, 'index'), - posixpath.join(OUTPUT_DIRNAME, modname.replace('.', '/'))), - modname)) - html.append('' * (len(stack) - 1)) - context = { - 'title': _('Overview: module code'), - 'body': (_('

    All modules for which code is available

    ') + - ''.join(html)), - } - - yield (posixpath.join(OUTPUT_DIRNAME, 'index'), context, 'page.html') - - -def setup(app: Sphinx) -> Dict[str, Any]: - app.add_config_value('viewcode_import', None, False) - app.add_config_value('viewcode_enable_epub', False, False) - app.add_config_value('viewcode_follow_imported_members', True, False) - app.connect('doctree-read', doctree_read) - app.connect('env-merge-info', env_merge_info) - app.connect('env-purge-doc', env_purge_doc) - app.connect('html-collect-pages', collect_pages) - app.connect('missing-reference', missing_reference) - # app.add_config_value('viewcode_include_modules', [], 'env') - # app.add_config_value('viewcode_exclude_modules', [], 'env') - app.add_event('viewcode-find-source') - app.add_event('viewcode-follow-imported') - app.add_post_transform(ViewcodeAnchorTransform) - return { - 'version': sphinx.__display_version__, - 'env_version': 1, - 'parallel_read_safe': True - } diff --git a/docs/reinforcement/docs/_ext/rename_include.py b/docs/reinforcement/docs/_ext/rename_include.py deleted file mode 100644 index bf7dea25f3ee7fd371659e80a3551439fbddee5a..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/_ext/rename_include.py +++ /dev/null @@ -1,60 +0,0 @@ -"""Rename .rst file to .txt file for include directive.""" -import os -import re -import glob -import logging - -logging.basicConfig(level=logging.WARNING, format='%(message)s') -logger = logging.getLogger(__name__) - -origin = "rst" -replace = "txt" - -include_re = re.compile(r'\.\. include::\s+(.*?)(\.rst|\.txt)') -include_re_sub = re.compile(rf'(\.\. include::\s+(.*?))\.{origin}') - -# Specified file_name lists excluded from rename procedure. 
-whitepaper = ['operations.rst'] - -def repl(matchobj): - """Replace functions for matched.""" - if matchobj.group(2).split('/')[-1] + f'.{origin}' in whitepaper: - return matchobj.group(0) - return rf'{matchobj.group(1)}.{replace}' - -def rename_include(api_dir): - """ - Rename .rst file to .txt file for include directive. - - api_dir - api path relative. - """ - tar = [] - for root, _, files in os.walk(api_dir): - for file in files: - if not file.endswith('.rst'): - continue - try: - with open(os.path.join(root, file), 'r+', encoding='utf-8') as f: - content = f.read() - tar_ = include_re.findall(content) - if tar_: - tar_ = [i[0].split('/')[-1]+f'.{origin}' for i in tar_] - tar.extend(tar_) - sub = include_re_sub.findall(content) - if sub: - content_ = include_re_sub.sub(repl, content) - f.seek(0) - f.truncate() - f.write(content_) - except UnicodeDecodeError: - # pylint: disable=logging-fstring-interpolation - logger.warning(f"UnicodeDecodeError for: {file}") - - all_rst = glob.glob(f'{api_dir}/**/*.{origin}', recursive=True) - - for i in all_rst: - if os.path.dirname(i).endswith("api_python") or os.path.basename(i) in whitepaper: - continue - name = os.path.basename(i) - if name in tar: - os.rename(i, i.replace(f'.{origin}', f'.{replace}')) diff --git a/docs/reinforcement/docs/requirements.txt b/docs/reinforcement/docs/requirements.txt deleted file mode 100644 index a1b6a69f6dbd9c6f78710f56889e14f0e85b27f4..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/requirements.txt +++ /dev/null @@ -1,7 +0,0 @@ -sphinx == 4.4.0 -docutils == 0.17.1 -myst-parser == 0.18.1 -sphinx_rtd_theme == 1.0.0 -numpy -IPython -jieba diff --git a/docs/reinforcement/docs/source_en/conf.py b/docs/reinforcement/docs/source_en/conf.py deleted file mode 100644 index f5dd393cf298d0ae60f59aef88c3106a93b9d1cf..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/source_en/conf.py +++ /dev/null @@ -1,169 +0,0 @@ -# Configuration file for the Sphinx 
documentation builder. -# -# This file only contains a selection of the most common options. For a full -# list see the documentation: -# https://www.sphinx-doc.org/en/master/usage/configuration.html - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import shutil -import sys -import IPython -import re -sys.path.append(os.path.abspath('../_ext')) -from sphinx.ext import autodoc as sphinx_autodoc - -import mindspore_rl - -# -- Project information ----------------------------------------------------- - -project = 'MindSpore Reinforcement' -copyright = 'MindSpore' -author = 'MindSpore' - -# The full version, including alpha/beta/rc tags -release = 'master' - - -# -- General configuration --------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'myst_parser', - 'sphinx.ext.mathjax', - 'IPython.sphinxext.ipython_console_highlighting' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. 
-mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -pygments_style = 'sphinx' - -# -- Options for HTML output ------------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = 'sphinx_rtd_theme' - -import sphinx_rtd_theme -layout_target = os.path.join(os.path.dirname(sphinx_rtd_theme.__file__), 'layout.html') -layout_src = '../../../../resource/_static/layout.html' -if os.path.exists(layout_target): - os.remove(layout_target) -shutil.copy(layout_src, layout_target) - -html_search_language = 'en' - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - 'python': ('https://docs.python.org/3', '../../../../resource/python_objects.inv'), -} - -# Modify default signatures for autodoc. 
-autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__) -autodoc_source_re = re.compile(r'stringify_signature\(.*?\)') -get_param_func_str = r"""\ -import re -import inspect as inspect_ - -def get_param_func(func): - try: - source_code = inspect_.getsource(func) - if func.__doc__: - source_code = source_code.replace(func.__doc__, '') - all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code) - all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\"")) - return all_params - except: - return '' - -def get_obj(obj): - if isinstance(obj, type): - return obj.__init__ - - return obj -""" - -with open(autodoc_source_path, "r+", encoding="utf8") as f: - code_str = f.read() - code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0) - exec(get_param_func_str, sphinx_autodoc.__dict__) - exec(code_str, sphinx_autodoc.__dict__) - -# Copy source files of chinese python api from mindscience repository. 
-from sphinx.util import logging -logger = logging.getLogger(__name__) - -src_dir_rm = os.path.join(os.getenv("RM_PATH"), 'docs/api/api_python_en') - -present_path = os.path.dirname(__file__) - -for i in os.listdir(src_dir_rm): - if os.path.isfile(os.path.join(src_dir_rm,i)): - if os.path.exists('./'+i): - os.remove('./'+i) - shutil.copy(os.path.join(src_dir_rm,i),'./'+i) - else: - if os.path.exists('./'+i): - shutil.rmtree('./'+i) - shutil.copytree(os.path.join(src_dir_rm,i),'./'+i) - -sys.path.append(os.path.abspath('../../../../resource/sphinx_ext')) -# import anchor_mod -import nbsphinx_mod - -sys.path.append(os.path.abspath('../../../../resource/search')) -import search_code - -sys.path.append(os.path.abspath('../../../../resource/custom_directives')) -from custom_directives import IncludeCodeDirective - -def setup(app): - app.add_directive('includecode', IncludeCodeDirective) - -src_release = os.path.join(os.getenv("RM_PATH"), 'RELEASE.md') -des_release = "./RELEASE.md" -with open(src_release, "r", encoding="utf-8") as f: - data = f.read() -if len(re.findall("\n## (.*?)\n",data)) > 1: - content = re.findall("(## [\s\S\n]*?)\n## ", data) -else: - content = re.findall("(## [\s\S\n]*)", data) -#result = content[0].replace('# MindSpore', '#', 1) -with open(des_release, "w", encoding="utf-8") as p: - p.write("# Release Notes"+"\n\n") - p.write(content[0]) \ No newline at end of file diff --git a/docs/reinforcement/docs/source_en/custom_config_info.md b/docs/reinforcement/docs/source_en/custom_config_info.md deleted file mode 100644 index 8fb76f38c10d930c98949a9edbfea443e0ac7be5..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/source_en/custom_config_info.md +++ /dev/null @@ -1,190 +0,0 @@ -# MindSpore RL Configuration Instruction - -[![View Source On 
Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_en/custom_config_info.md)
-
-## Overview
-
-In recent years, deep reinforcement learning has been developing by leaps and bounds, with new algorithms coming out every year. To offer a highly scalable and reusable reinforcement learning framework, MindSpore RL separates an algorithm into several parts, such as Actor, Learner, Policy, Environment and ReplayBuffer. Moreover, because deep reinforcement learning algorithms are complex, their performance is largely influenced by hyper-parameters. MindSpore RL provides a central configuration API, which decouples the algorithm from deployment and execution considerations and helps users adjust models and algorithms conveniently.
-
-This tutorial uses the DQN algorithm as an example to introduce how to use this configuration API and to help users customize their own algorithms.
-
-You can obtain the code of the DQN algorithm from [https://github.com/mindspore-lab/mindrl/tree/master/example/dqn](https://github.com/mindspore-lab/mindrl/tree/master/example/dqn).
-
-## Configuration Details
-
-MindSpore RL uses `algorithm_config` to define each algorithm component and the corresponding hyper-parameters. `algorithm_config` is a Python dictionary that describes the actor, learner, policy, collect_environment, eval_environment and replay buffer respectively. The framework arranges execution and deployment from this configuration, so the user only needs to focus on the algorithm design.
-
-The following code defines a set of algorithm configurations and uses `algorithm_config` to create a `Session`. `Session` is responsible for allocating resources and executing computational graph compilation and execution.
-
-```python
-from mindspore_rl.core import Session
-algorithm_config = {
-    'actor': {...},
-    'learner': {...},
-    'policy_and_network': {...},
-    'collect_environment': {...},
-    'eval_environment': {...},
-    'replay_buffer': {...}
-}
-
-session = Session(algorithm_config)
-session.run(...)
-```
-
-Each parameter of `algorithm_config` and its meaning are described below.
-
-### Policy Configuration
-
-A Policy is usually used to determine the behaviour (or action) that the agent will execute in the next step. It takes `type` and `params` as its sub-items.
-
-- `type`: specifies the name of the Policy class. The Actor determines the action through the Policy. In deep reinforcement learning, the Policy usually uses a deep neural network to extract features from the environment and outputs the action for the next step.
-- `params`: specifies the parameters used when creating the Policy instance. Note that `type` and `params` need to match.
-
-```python
-from dqn.src.dqn import DQNPolicy
-
-policy_params = {
-    'epsi_high': 0.1,        # epsi_high/epsi_low/decay control the proportion of exploitation and exploration
-    'epsi_low': 0.1,         # epsi_high: the highest probability of exploration, epsi_low: the lowest probability of exploration
-    'decay': 200,            # decay: the step decay
-    'state_space_dim': 0,    # the dimension of state space, 0 means that it will be read from the environment automatically
-    'action_space_dim': 0,   # the dimension of action space, 0 means that it will be read from the environment automatically
-    'hidden_size': 100,      # the dimension of the hidden layer
-}
-
-algorithm_config = {
-    ...
-    'policy_and_network': {
-        'type': DQNPolicy,
-        'params': policy_params,
-    },
-    ...
-}
-```
-
-| key              | Type       | Range                                   | Description                                                  |
-| :--------------: | :--------: | :-------------------------------------: | :----------------------------------------------------------: |
-| type             | Class      | The user-defined class                  | The same name as the user-defined class                      |
-| params(optional) | Dictionary | Any value in key-value format or None   | Customized parameters; the user can input any value in key-value format |
-
-### Environment Configuration
-
-`collect_environment` and `eval_environment` are used to collect experience during interaction with the environment and to evaluate the model after training, respectively. `number`, `type` and `params` need to be provided to create their instances.
-
-- `number`: the number of environments used in the algorithm.
-- `type`: specifies the name of the environment class, which can be either an environment from MindSpore RL, such as `GymEnvironment`, or a user-defined environment.
-- `params`: specifies the parameters used when creating the environment instance. Note that `type` and `params` need to match.
-
-The following example defines the configuration of the environment. The framework will create a `CartPole-v0` environment like `Environment(name='CartPole-v0')`. The configurations of `collect_environment` and `eval_environment` are the same.
-
-```python
-from mindspore_rl.environment import GymEnvironment
-collect_env_params = {'name': 'CartPole-v0'}
-eval_env_params = {'name': 'CartPole-v0'}
-algorithm_config = {
-    ...
-    'collect_environment': {
-        'number': 1,
-        'type': GymEnvironment,         # the class name of the environment
-        'params': collect_env_params    # parameters of the environment
-    },
-    'eval_environment': {
-        'number': 1,
-        'type': GymEnvironment,         # the class name of the environment
-        'params': eval_env_params       # parameters of the environment
-    },
-    ...
-}
-```
-
-| key                    | Type       | Range                                                        | Description                                                  |
-| :--------------------: | :--------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
-| number (optional)      | Integer    | [1, +∞)                                                      | If the user sets the number of environments, it must be larger than 0. If the user does not set it, the framework will not wrap the environment in `MultiEnvironmentWrapper` |
-| num_parallel(optional) | Integer    | [1, number]                                                  | If the user does not set it, the environments run in parallel by default. The user can set num_parallel: 1 to turn off parallel execution, or enter their own parallel configuration |
-| type                   | Class      | The user-defined and implemented subclass of environment     | The class name of the environment                            |
-| params                 | Dictionary | Any value in key-value format or None                        | Customized parameters; the user can input any value in key-value format |
-
-### Actor Configuration
-
-`Actor` is in charge of interacting with the environment. Generally, `Actor` interacts with `Env` through `Policy`. Some algorithms store the experience obtained during this interaction in the `ReplayBuffer`. Therefore, `Actor` holds the `Policy` and `Environment` and creates the `ReplayBuffer` as needed. In the Actor configuration, `policies` and `networks` need to specify the names of member variables in `Policy`.
-
-The following code defines the configuration of `DQNActor`. The framework will create the Actor instance like `DQNActor(algorithm_config['actor'])`.
-
-```python
-algorithm_config = {
-    ...
-    'actor': {
-        'number': 1,                                                  # the number of Actors
-        'type': DQNActor,                                             # the class name of the Actor
-        'policies': ['init_policy', 'collect_policy', 'eval_policy'], # take the policies named init_policy, collect_policy and eval_policy in the Policy class as input to create the actor instance
-        'share_env': True                                             # whether the environment is shared by all actors
-    }
-    ...
-}
-```
-
-| key                 | Type           | Range                                                      | Description                                                  |
-| :-----------------: | :------------: | :--------------------------------------------------------: | :----------------------------------------------------------: |
-| number              | Integer        | [1, +∞)                                                    | Number of Actors; currently only 1 is supported              |
-| type                | Class          | The user-defined and implemented subclass of actor         | The same name as the user-defined and implemented subclass of actor |
-| params(optional)    | Dictionary     | Any value in key-value format or None                      | Customized parameters; the user can input any value in key-value format |
-| policies            | List of String | Same variable names as the user-defined policies           | Every string in the list must correspond one-to-one with the names of the policies initialized in the user-defined policy class |
-| networks(optional)  | List of String | Same variable names as the user-defined networks           | Every string in the list must correspond one-to-one with the names of the networks initialized in the user-defined policy class |
-| share_env(optional) | Boolean        | True or False                                              | Default: True, meaning all actors share one `collect_environment`. Otherwise, an instance of `collect_environment` is created for each actor. |
-
-### ReplayBuffer Configuration
-
-For some algorithms, `ReplayBuffer` is used to store the experience obtained by interaction between the actor and the environment. The experience is then used to train the network.
-
-```python
-from mindspore_rl.core.replay_buffer import ReplayBuffer
-algorithm_config = {
-    ...
-    'replay_buffer': {'number': 1,
-                      'type': ReplayBuffer,
-                      'capacity': 100000,                                           # the capacity of the ReplayBuffer
-                      'sample_size': 64,                                            # sample batch size
-                      'data_shape': [(4,), (1,), (1,), (4,)],                       # the dimension info of the ReplayBuffer
-                      'data_type': [ms.float32, ms.int32, ms.float32, ms.float32]}, # the data types of the ReplayBuffer
-}
-```
-
-| key                   | Type                        | Range                                       | Description                                                  |
-| :-------------------: | :-------------------------: | :-----------------------------------------: | :----------------------------------------------------------: |
-| number                | Integer                     | [1, +∞)                                     | Number of replay buffers created                             |
-| type                  | Class                       | User-defined or provided ReplayBuffer class | The same name as the user-defined or provided ReplayBuffer class |
-| capacity              | Integer                     | [0, +∞)                                     | The capacity of the ReplayBuffer                             |
-| data_shape            | List of Integer Tuple       | [0, +∞)                                     | The first number of each tuple must equal the number of environments |
-| data_type             | List of mindspore data type | Belongs to MindSpore data types             | The length of this list must equal the length of data_shape  |
-| sample_size(optional) | Integer                     | [0, capacity]                               | The maximum value is the capacity of the replay buffer. Default: 1 |
-
-### Learner Configuration
-
-`Learner` is used to update the weights of the neural network according to experience. `Learner` holds the DNNs that are defined in `Policy` (the names of the member variables in `Policy` match the names in `networks`), which are used to calculate the loss and update the weights of the neural network.
-
-The following code defines the configuration of `DQNLearner`. The framework will create the Learner instance like `DQNLearner(algorithm_config['learner'])`.
-
-```python
-from dqn.src.dqn import DQNLearner
-learner_params = {'gamma': 0.99,
-                  'lr': 0.001,                  # learning rate
-                  }
-algorithm_config = {
-    ...
-    'learner': {
-        'number': 1,                                      # the number of Learners
-        'type': DQNLearner,                               # the class name of the Learner
-        'params': learner_params,                         # the parameters of the Learner
-        'networks': ['policy_network', 'target_network']  # the Learner takes the policy_network and target_network from DQNPolicy as input arguments to update the network
-    },
-    ...
-}
-```
-
-| key              | Type           | Range                                                | Description                                                  |
-| :------:         | :------------: | :--------------------------------------------------: | :----------------------------------------------------------: |
-| number           | Integer        | [1, +∞)                                              | Number of Learners; currently only 1 is supported            |
-| type             | Class          | The user-defined and implemented subclass of learner | The same name as the user-defined and implemented subclass of learner |
-| params(optional) | Dictionary     | Any value in key-value format or None                | Customized parameters; the user can input any value in key-value format. |
-| networks         | List of String | Same variable names as the user-defined networks     | Every string in the list must match the names of the networks initialized in the user-defined policy class |
diff --git a/docs/reinforcement/docs/source_en/dqn.md b/docs/reinforcement/docs/source_en/dqn.md
deleted file mode 100644
index 96b571353b205e6532aa090c7bff58fe5f56e902..0000000000000000000000000000000000000000
--- a/docs/reinforcement/docs/source_en/dqn.md
+++ /dev/null
@@ -1,410 +0,0 @@
-# Deep Q Learning (DQN) with MindSpore Reinforcement
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_en/dqn.md)
-
-## Summary
-
-To implement a reinforcement learning algorithm with MindSpore Reinforcement, a user needs to:
-
-- provide an algorithm configuration, which separates the implementation of the algorithm from its deployment details;
-- implement the algorithm based on an actor-learner-environment abstraction;
-- create a session
object that executes the implemented algorithm.
-
-This tutorial shows the use of the MindSpore Reinforcement API to implement the Deep Q Learning (DQN) algorithm. Note that, for clarity and readability, only API-related code sections are presented, and irrelevant code is omitted. The source code of the full DQN implementation for MindSpore Reinforcement can be found [here](https://github.com/mindspore-lab/mindrl/tree/master/example/dqn).
-
-## Specifying the Actor-Learner-Environment Abstraction for DQN
-
-The DQN algorithm requires two deep neural networks, a *policy network* for approximating the action-value function (Q function) and a *target network* for stabilising the training. The policy network is the strategy on how to act on the environment, and the goal of the DQN algorithm is to train the policy network for maximum reward. In addition, the DQN algorithm uses an *experience replay* technique to maintain previous observations for off-policy learning, where an actor uses different behavioural policies to act on the environment.
-
-MindSpore Reinforcement uses an *algorithm configuration* to specify the logical components (Actor, Learner, Policy and Network, Collect Environment, Eval Environment, ReplayBuffer) required by the DQN algorithm and the associated hyper-parameters. It can execute the algorithm with different strategies based on the provided configuration, which allows the user to focus on the algorithm design.
-
-The algorithm configuration is a Python dictionary that specifies how to construct different components of the DQN algorithm. The hyper-parameters of each component are configured in separate Python dictionaries.
The DQN algorithm configuration can be defined as follows:
-
-```python
-algorithm_config = {
-    'actor': {
-        'number': 1,                                                        # Number of Actors
-        'type': DQNActor,                                                   # The Actor class
-        'policies': ['init_policy', 'collect_policy', 'evaluate_policy'],   # The policies used to choose actions
-    },
-    'learner': {
-        'number': 1,                                                        # Number of Learners
-        'type': DQNLearner,                                                 # The Learner class
-        'params': learner_params,                                           # The parameters of the Learner
-        'networks': ['policy_network', 'target_network']                    # The networks used by the Learner
-    },
-    'policy_and_network': {
-        'type': DQNPolicy,                                                  # The Policy class
-        'params': policy_params                                             # The parameters of the Policy
-    },
-    'collect_environment': {
-        'number': 1,                                                        # Number of Collect Environments
-        'type': GymEnvironment,                                             # The Collect Environment class
-        'params': collect_env_params                                        # The parameters of the Collect Environment
-    },
-    'eval_environment': {
-        'number': 1,                                                        # Same as Collect Environment
-        'type': GymEnvironment,
-        'params': eval_env_params
-    },
-    'replay_buffer': {'number': 1,                                          # Number of ReplayBuffers
-                      'type': ReplayBuffer,                                 # The ReplayBuffer class
-                      'capacity': 100000,                                   # The capacity of the ReplayBuffer
-                      'data_shape': [(4,), (1,), (1,), (4,)],               # Data shape of the ReplayBuffer
-                      'data_type': [ms.float32, ms.int32, ms.float32, ms.float32],    # Data type of the ReplayBuffer
-                      'sample_size': 64},                                   # Sample size of the ReplayBuffer
-}
-```
-
-The configuration defines six top-level entries, each corresponding to an algorithmic component: *actor, learner, policy*, *replaybuffer* and two *environment*s. Each entry corresponds to a class, which must be defined by the user to implement the DQN algorithm’s logic.
-
-A top-level entry has sub-entries that describe the component. The *number* entry defines the number of instances of the component used by the algorithm. The *type* entry refers to the name of the Python class that must be defined to implement the component. The *params* entry provides the necessary hyper-parameters for the component.
The *policies* entry defines the policies used by the component. The *networks* entry in *learner* lists all neural networks used by this component. In the DQN example, only actors interact with the environment. The *replay_buffer* entry defines the *capacity, shape, sample size and data type* of the replay buffer.
-
-For the DQN algorithm, we configure one actor `'number': 1`, its Python class `'type': DQNActor`, and three behaviour policies `'policies': ['init_policy', 'collect_policy', 'evaluate_policy']`.
-
-Other components are defined in a similar way -- please refer to the [complete DQN code example](https://github.com/mindspore-lab/mindrl/tree/master/example/dqn) and the [MindSpore Reinforcement API documentation](https://www.mindspore.cn/reinforcement/docs/en/master/reinforcement.html) for more details.
-
-Note that MindSpore Reinforcement uses a single *policy* class to define all policies and neural networks used by the algorithm. In this way, it hides the complexity of data sharing and communication between policies and neural networks.
-
-In train.py, MindSpore Reinforcement executes the algorithm in the context of a *session*. A *Session* allocates resources (on one or more cluster machines) and executes the compiled computational graph. A user passes the algorithm configuration to instantiate a Session class:
-
-```python
-from mindspore_rl.core import Session
-dqn_session = Session(dqn_algorithm_config)
-```
-
-Invoke the `run` method with the corresponding parameters to execute the DQN algorithm. *class_type* is the user-defined Trainer class, which will be described later; *episode* is the number of iterations of the algorithm; *params* holds the parameters used by the trainer class, which are written in the configuration file. For more detail, please check the *config.py* file in the code example. *callbacks* defines some metrics methods; they are described in more detail in the Callbacks part of the API documentation.
-
-```python
-from src.dqn_trainer import DQNTrainer
-from mindspore_rl.utils.callback import CheckpointCallback, LossCallback, EvaluateCallback
-loss_cb = LossCallback()
-ckpt_cb = CheckpointCallback(50, config.trainer_params['ckpt_path'])
-eval_cb = EvaluateCallback(10)
-cbs = [loss_cb, ckpt_cb, eval_cb]
-dqn_session.run(class_type=DQNTrainer, episode=episode, params=config.trainer_params, callbacks=cbs)
-```
-
-To leverage MindSpore's computational graph feature, users set the execution mode to `GRAPH_MODE`.
-
-```python
-import mindspore as ms
-ms.set_context(mode=ms.GRAPH_MODE)
-```
-
-Methods that are annotated with `@jit` will be compiled into the MindSpore computational graph for auto-parallelisation and acceleration. In this tutorial, we use this feature to implement an efficient `DQNTrainer` class.
-
-### Defining the DQNTrainer Class
-
-The `DQNTrainer` class expresses how the algorithm runs: for example, it iteratively collects experience by interacting with the environment and inserts it into the *ReplayBuffer*, then obtains data from the *ReplayBuffer* to train the target models. It must inherit from the `Trainer` class, which is part of the MindSpore Reinforcement API.
-
-The `Trainer` base class contains an `MSRL` (MindSpore Reinforcement) object, which allows the algorithm implementation to interact with MindSpore Reinforcement to implement the training logic. The `MSRL` class instantiates the RL algorithm components based on the previously defined algorithm configuration. It provides the function handlers that transparently bind to methods of actors, learners, or the replay buffer object, as defined by users. As a result, the `MSRL` class enables users to focus on the algorithm logic, while it transparently handles object creation, data sharing and communication between different algorithmic components on one or more workers. Users instantiate the `MSRL` object by creating the previously mentioned `Session` object with the algorithm configuration.
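The Trainer contract described above -- a base-class training loop that delegates per-episode work to user-defined methods -- can be sketched in plain Python. This is a simplified illustration, not MindSpore Reinforcement code; `TrainerSketch` and `CountingTrainer` are hypothetical names that mirror the `train`/`train_one_episode`/`evaluate` division of labour:

```python
# Simplified sketch of the Trainer pattern (NOT the real framework code):
# the base class owns the episode loop, the subclass supplies the per-episode
# logic and the evaluation, matching the contract described in the text.

class TrainerSketch:
    """Base class: drives the episode loop and final evaluation."""

    def train(self, episodes):
        for _ in range(episodes):
            self.train_one_episode()   # user-defined per-episode logic
        return self.evaluate()         # user-defined evaluation

    def train_one_episode(self):
        raise NotImplementedError

    def evaluate(self):
        raise NotImplementedError


class CountingTrainer(TrainerSketch):
    """Toy subclass: records how many episodes were run."""

    def __init__(self):
        self.episodes_run = 0

    def train_one_episode(self):
        self.episodes_run += 1

    def evaluate(self):
        # A stand-in "average reward": just report the episode count.
        return float(self.episodes_run)


trainer = CountingTrainer()
avg_reward = trainer.train(episodes=10)
print(avg_reward)  # 10.0
```

In the real framework the loop additionally dispatches callbacks and compiles the `@jit`-annotated methods into a computational graph, but the division of responsibilities is the same.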
-
-The `DQNTrainer` must overload `train_one_episode` for training, `evaluate` for evaluation, and `trainable_variables` for saving checkpoints. In this tutorial, it is defined as follows:
-
-```python
-class DQNTrainer(Trainer):
-    def __init__(self, msrl, params):
-        ...
-        super(DQNTrainer, self).__init__(msrl)
-
-    def trainable_variables(self):
-        """Trainable variables for saving."""
-        trainable_variables = {"policy_net": self.msrl.learner.policy_network}
-        return trainable_variables
-
-    @ms.jit
-    def init_training(self):
-        """Initialize training"""
-        state = self.msrl.collect_environment.reset()
-        done = self.false
-        i = self.zero_value
-        while self.less(i, self.fill_value):
-            done, _, new_state, action, my_reward = self.msrl.agent_act(
-                trainer.INIT, state)
-            self.msrl.replay_buffer_insert(
-                [state, action, my_reward, new_state])
-            state = new_state
-            if done:
-                state = self.msrl.collect_environment.reset()
-                done = self.false
-            i += 1
-        return done
-
-    @ms.jit
-    def evaluate(self):
-        """Policy evaluate"""
-        total_reward = self.zero_value
-        eval_iter = self.zero_value
-        while self.less(eval_iter, self.num_evaluate_episode):
-            episode_reward = self.zero_value
-            state = self.msrl.eval_environment.reset()
-            done = self.false
-            while not done:
-                done, r, state = self.msrl.agent_act(trainer.EVAL, state)
-                r = self.squeeze(r)
-                episode_reward += r
-            total_reward += episode_reward
-            eval_iter += 1
-        avg_reward = total_reward / self.num_evaluate_episode
-        return avg_reward
-```
-
-Users call the `train` method of the base class. It trains the models for the specified number of episodes (iterations), with each episode calling the user-defined `train_one_episode` method. Finally, the `train` method evaluates the policy to obtain a reward value by calling the `evaluate` method.
-
-In each iteration of the training loop, the `train_one_episode` method is invoked to train an episode:
-
-```python
-@ms.jit
-def train_one_episode(self):
-    """Train one episode"""
-    if not self.inited:
-        self.init_training()
-        self.inited = self.true
-    state = self.msrl.collect_environment.reset()
-    done = self.false
-    total_reward = self.zero
-    steps = self.zero
-    loss = self.zero
-    while not done:
-        done, r, new_state, action, my_reward = self.msrl.agent_act(
-            trainer.COLLECT, state)
-        self.msrl.replay_buffer_insert(
-            [state, action, my_reward, new_state])
-        state = new_state
-        r = self.squeeze(r)
-        loss = self.msrl.agent_learn(self.msrl.replay_buffer_sample())
-        total_reward += r
-        steps += 1
-        if not self.mod(steps, self.update_period):
-            self.msrl.learner.update()
-    return loss, total_reward, steps
-```
-
-The `@jit` annotation states that this method will be compiled into a MindSpore computational graph for acceleration. To support this, all scalar values must be defined as tensor types, e.g. `self.zero_value = Tensor(0, mindspore.float32)`.
-
-The `train_one_episode` method first calls the environment's `reset` method, `self.msrl.collect_environment.reset()`, to reset the environment. It then collects the experience from the environment with the `msrl.agent_act` function handler and inserts the experience data in the replay buffer using the `self.msrl.replay_buffer_insert` function. Afterwards, it invokes the `self.msrl.agent_learn` function to train the target model. The input of `self.msrl.agent_learn` is a set of sampled results returned by `self.msrl.replay_buffer_sample`.
-
-The replay buffer class, `ReplayBuffer`, is provided by MindSpore Reinforcement. It defines `insert` and `sample` methods to store and sample the experience data in a replay buffer, respectively. Please refer to the [complete DQN code example](https://github.com/mindspore-lab/mindrl/tree/master/example/dqn) for details.
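The `insert`/`sample` semantics described above can be illustrated with a small plain-Python ring buffer. This is only a sketch under the assumption of bounded FIFO storage with uniform sampling; the real `ReplayBuffer` stores typed tensors and runs inside the computational graph, and `RingReplayBuffer` is a hypothetical name:

```python
import random
from collections import deque


class RingReplayBuffer:
    """Minimal experience-replay sketch: bounded FIFO storage + uniform sampling."""

    def __init__(self, capacity, seed=0):
        self._storage = deque(maxlen=capacity)  # oldest entries are evicted first
        self._rng = random.Random(seed)

    def insert(self, experience):
        # experience is e.g. a (state, action, reward, next_state) tuple
        self._storage.append(experience)

    def sample(self, batch_size):
        # Uniform sampling without replacement from the stored experiences.
        return self._rng.sample(list(self._storage), batch_size)


buf = RingReplayBuffer(capacity=3)
for step in range(5):              # insert 5 items into a capacity-3 buffer
    buf.insert((step, step + 1))
print(len(buf._storage))           # 3 -> the two oldest items were evicted
batch = buf.sample(2)
print(all(item in [(2, 3), (3, 4), (4, 5)] for item in batch))  # True
```

The eviction behaviour is why `capacity` matters in the configuration: once the buffer is full, new experience overwrites the oldest, keeping training data close to the current policy.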
-
-### Defining the DQNPolicy Class
-
-To implement the neural networks and define the policies, a user defines the `DQNPolicy` class:
-
-```python
-class DQNPolicy():
-    def __init__(self, params):
-        self.policy_network = FullyConnectedNet(
-            params['state_space_dim'],
-            params['hidden_size'],
-            params['action_space_dim'],
-            params['compute_type'])
-        self.target_network = FullyConnectedNet(
-            params['state_space_dim'],
-            params['hidden_size'],
-            params['action_space_dim'],
-            params['compute_type'])
-```
-
-The constructor takes as input the previously-defined dictionary-typed hyper-parameters from config.py, `policy_params`.
-
-Before defining the policy network and the target network, users must define the structure of the neural networks using MindSpore operators. For example, they may be objects of the `FullyConnectedNet` class, which is defined as follows:
-
-```python
-class FullyConnectedNet(mindspore.nn.Cell):
-    def __init__(self, input_size, hidden_size, output_size, compute_type=mstype.float32):
-        super(FullyConnectedNet, self).__init__()
-        self.linear1 = nn.Dense(
-            input_size,
-            hidden_size,
-            weight_init="XavierUniform").to_float(compute_type)
-        self.linear2 = nn.Dense(
-            hidden_size,
-            output_size,
-            weight_init="XavierUniform").to_float(compute_type)
-        self.relu = nn.ReLU()
-```
-
-The DQN algorithm uses a loss function to optimize the weights of the neural networks. At this point, a user must define a neural network used to compute the loss function. This network is specified as a nested class of `DQNLearner`. In addition, an optimizer is required to train the network.
The optimizer and the loss function are defined as follows:
-
-```python
-class DQNLearner(Learner):
-    """DQN Learner"""
-
-    class PolicyNetWithLossCell(nn.Cell):
-        """DQN policy network with loss cell"""
-
-        def __init__(self, backbone, loss_fn):
-            super(DQNLearner.PolicyNetWithLossCell,
-                  self).__init__(auto_prefix=False)
-            self._backbone = backbone
-            self._loss_fn = loss_fn
-            self.gather = P.GatherD()
-
-        def construct(self, x, a0, label):
-            """Forward computation of the loss cell"""
-            out = self._backbone(x)
-            out = self.gather(out, 1, a0)
-            loss = self._loss_fn(out, label)
-            return loss
-
-    def __init__(self, params=None):
-        super(DQNLearner, self).__init__()
-        ...
-        optimizer = nn.Adam(
-            self.policy_network.trainable_params(),
-            learning_rate=params['lr'])
-        loss_fn = nn.MSELoss()
-        loss_q_net = self.PolicyNetWithLossCell(self.policy_network, loss_fn)
-        self.policy_network_train = nn.TrainOneStepCell(loss_q_net, optimizer)
-        self.policy_network_train.set_train(mode=True)
-        ...
-```
-
-The DQN algorithm is an *off-policy* algorithm that learns using an epsilon-greedy policy. It uses different behavioural policies for acting on the environment and collecting data. In this example, we use the `RandomPolicy` to initialize the training, the `EpsilonGreedyPolicy` to collect the experience during the training, and the `GreedyPolicy` to evaluate:
-
-```python
-class DQNPolicy():
-    def __init__(self, params):
-        ...
-        self.init_policy = RandomPolicy(params['action_space_dim'])
-        self.collect_policy = EpsilonGreedyPolicy(self.policy_network, (1, 1), params['epsi_high'],
-                                                  params['epsi_low'], params['decay'], params['action_space_dim'])
-        self.evaluate_policy = GreedyPolicy(self.policy_network)
-```
-
-Since the above three behavioural policies are common for a range of RL algorithms, MindSpore Reinforcement provides them as reusable building blocks. Users may also define their own algorithm-specific behavioural policies.
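The epsilon-greedy idea behind `EpsilonGreedyPolicy` can be sketched in plain Python. The exponential decay schedule below, driven by `epsi_high`, `epsi_low` and `decay` as in `policy_params`, is one common choice and an assumption here, not necessarily the exact formula MindSpore Reinforcement uses; `FixedRng` is a hypothetical stub that makes the demo deterministic:

```python
import math
import random


def epsilon(step, epsi_high, epsi_low, decay):
    """Exponentially decay exploration from epsi_high toward epsi_low (assumed schedule)."""
    return epsi_low + (epsi_high - epsi_low) * math.exp(-step / decay)


def epsilon_greedy_action(q_values, step, epsi_high=0.9, epsi_low=0.05,
                          decay=200, rng=random):
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if rng.random() < epsilon(step, epsi_high, epsi_low, decay):
        return rng.randrange(len(q_values))                         # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit


class FixedRng:
    """Deterministic stand-in rng for the demo: never explores."""
    def random(self):
        return 1.0

    def randrange(self, n):
        return 0


q = [0.1, 0.7, 0.3]
print(round(epsilon(0, 0.9, 0.05, 200), 2))                    # 0.9 at step 0
print(epsilon_greedy_action(q, step=10_000, rng=FixedRng()))   # 1 (greedy action)
```

Early in training epsilon is near `epsi_high`, so the agent mostly explores; as the step count grows, epsilon decays toward `epsi_low` and the agent increasingly exploits the learned Q-values.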
- -Note that the names of the methods and the keys of the parameter dictionary must be consistent with the algorithm configuration defined earlier. - -### Defining the DQNActor Class - -To implement the `DQNActor`, a user defines a new actor component that inherits from the `Actor` class provided by MindSpore Reinforcement. They must then overload the methods in `Actor` class: - -```python -class DQNActor(Actor): - ... - def act(self, phase, params): - if phase == 1: - # Fill the replay buffer - action = self.init_policy() - new_state, reward, done = self._environment.step(action) - action = self.reshape(action, (1,)) - my_reward = self.select(done, self.penalty, self.reward) - return done, reward, new_state, action, my_reward - if phase == 2: - # Experience collection - self.step += 1 - ts0 = self.expand_dims(params, 0) - step_tensor = self.ones((1, 1), ms.float32) * self.step - - action = self.collect_policy(ts0, step_tensor) - new_state, reward, done = self._environment.step(action) - action = self.reshape(action, (1,)) - my_reward = self.select(done, self.penalty, self.reward) - return done, reward, new_state, action, my_reward - if phase == 3: - # Evaluate the trained policy - ts0 = self.expand_dims(params, 0) - action = self.evaluate_policy(ts0) - new_state, reward, done = self._eval_env.step(action) - return done, reward, new_state - self.print("Phase is incorrect") - return 0 -``` - -The three methods act on the specified environment with different policies, which map states to actions. The methods take as input a tensor-typed value and return the trajectory from the environment. - -To interact with the environment, the Actor uses the `step(action)` method defined in the `Environment` class. For an action applied to the specified environment, this method reacts and returns a ternary. 
The triple includes the new state after applying the previous action, the reward obtained as a floating-point value, and boolean flags for terminating the episode and resetting the environment.
-
-`ReplayBuffer` defines an `insert` method, which is called by the `DQNActor` object to store the experience data in the replay buffer.
-
-The `Environment` and `ReplayBuffer` classes are provided by the MindSpore Reinforcement API.
-
-The constructor of the `DQNActor` class defines the environment, the replay buffer, the policies, and the networks. It takes as input the dictionary-typed parameters, which were defined in the algorithm configuration. Below, we only show the initialisation of the environment; other attributes are assigned in a similar way:
-
-```python
-class DQNActor(Actor):
-    def __init__(self, params):
-        self._environment = params['collect_environment']
-        self._eval_env = params['eval_environment']
-        ...
-```
-
-### Defining the DQNLearner Class
-
-To implement the `DQNLearner`, a class must inherit from the `Learner` class in the MindSpore Reinforcement API and overload the `learn` method:
-
-```python
-class DQNLearner(Learner):
-    ...
-    def learn(self, experience):
-        """Model update"""
-        s0, a0, r1, s1 = experience
-        next_state_values = self.target_network(s1)
-        next_state_values = next_state_values.max(axis=1)
-        r1 = self.reshape(r1, (-1,))
-
-        y_true = r1 + self.gamma * next_state_values
-
-        # Modify last step reward
-        one = self.ones_like(r1)
-        y_true = self.select(r1 == -one, one, y_true)
-        y_true = self.expand_dims(y_true, 1)
-
-        success = self.policy_network_train(s0, a0, y_true)
-        return success
-```
-
-Here, the `learn` method takes as input the trajectory (sampled from a replay buffer) to train the policy network. 
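The Bellman target computed inside `learn` can be traced with plain numbers. The sketch below is an illustration only, not framework code; it assumes, as in this CartPole example, a per-step reward of 1, a terminal penalty of -1 recorded by the actor, and an assumed discount rate of 0.99:

```python
def td_target(reward, next_q_values, gamma=0.99):
    """Bellman target y = r + gamma * max_a Q(s', a), with the terminal
    override from learn(): when the stored reward is -1 (the penalty that
    marks the last step of an episode), the target is replaced by 1."""
    if reward == -1.0:
        return 1.0
    return reward + gamma * max(next_q_values)

print(td_target(1.0, [0.5, 2.0]))   # 1.0 + 0.99 * max(0.5, 2.0)
print(td_target(-1.0, [0.5, 2.0]))  # the terminal override: 1.0
```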
The constructor assigns the network, the policy, and the discount rate to the DQNLearner by receiving a dictionary-typed configuration from the algorithm configuration:
-
-```python
-class DQNLearner(Learner):
-    def __init__(self, params=None):
-        super(DQNLearner, self).__init__()
-        self.policy_network = params['policy_network']
-        self.target_network = params['target_network']
-```
-
-## Executing and Viewing Results
-
-Execute the script `train.py` to start training the DQN model.
-
-```shell
-cd example/dqn/
-python train.py
-```
-
-The execution results are shown below:
-
-```text
------------------------------------------
-Evaluation result in episode 0 is 95.300
------------------------------------------
-Episode 0, steps: 33.0, reward: 33.000
-Episode 1, steps: 45.0, reward: 12.000
-Episode 2, steps: 54.0, reward: 9.000
-Episode 3, steps: 64.0, reward: 10.000
-Episode 4, steps: 73.0, reward: 9.000
-Episode 5, steps: 82.0, reward: 9.000
-Episode 6, steps: 91.0, reward: 9.000
-Episode 7, steps: 100.0, reward: 9.000
-Episode 8, steps: 109.0, reward: 9.000
-Episode 9, steps: 118.0, reward: 9.000
-...
-...
-Episode 200, steps: 25540.0, reward: 200.000
-Episode 201, steps: 25740.0, reward: 200.000
-Episode 202, steps: 25940.0, reward: 200.000
-Episode 203, steps: 26140.0, reward: 200.000
-Episode 204, steps: 26340.0, reward: 200.000
-Episode 205, steps: 26518.0, reward: 178.000
-Episode 206, steps: 26718.0, reward: 200.000
-Episode 207, steps: 26890.0, reward: 172.000
-Episode 208, steps: 27090.0, reward: 200.000
-Episode 209, steps: 27290.0, reward: 200.000
------------------------------------------
-Evaluation result in episode 210 is 200.000
------------------------------------------
-```
-
-![CartPole](../source_zh_cn/images/cartpole.gif)
diff --git a/docs/reinforcement/docs/source_en/environment.md b/docs/reinforcement/docs/source_en/environment.md
deleted file mode 100644
index 6985999b8c27383c8ebb83dfb5cb6fa4c28360f8..0000000000000000000000000000000000000000
--- a/docs/reinforcement/docs/source_en/environment.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Reinforcement Learning Environment Access
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_en/environment.md)
-
-## Overview
-
-In reinforcement learning, an agent learns a policy that maximizes a numerical reward signal while interacting with its environment. The "environment", as the problem to be solved, is a key element of reinforcement learning.
-
-A wide variety of environments are currently used for reinforcement learning: [Mujoco](https://github.com/deepmind/mujoco), [MPE](https://github.com/openai/multiagent-particle-envs), [Atari](https://github.com/gsurma/atari), [PySC2](https://www.github.com/deepmind/pysc2), [SMAC](https://github.com/oxwhirl/smac), [TORCS](https://github.com/ugo-nama-kun/gym_torcs), [Isaac](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs), etc. 
Currently, MindSpore Reinforcement is connected to two environments, Gym and SMAC, and will gradually support more environments as its set of algorithms grows. This article introduces how to access a third-party environment under MindSpore Reinforcement.
-
-## Encapsulating Environmental Python Functions as Operators
-
-First, a brief introduction to the static and dynamic graph modes:
-
-- In dynamic graph mode, the program is executed line by line in the order in which the code is written, and the compiler sends the individual operators of the neural network down to the device one by one for computation, making it easy for the user to write and debug the neural network model.
-
-- In static graph mode, the program compiles the developer-defined algorithm into a computation graph when the program is compiled for execution. In the process, the compiler can reduce resource overhead and obtain better execution performance by applying graph optimization techniques.
-
-Since the syntax supported by static graph mode is only a subset of the Python language, while commonly-used environments generally implement their interactions through Python interfaces, the syntax differences between the two often result in graph compilation errors. To address this problem, developers can use the `PyFunc` operator to encapsulate a Python function as an operator in a MindSpore computation graph.
-
-Next, using gym as an example, we encapsulate `env.reset()` as an operator in a MindSpore computation graph.
-
-The following code creates a `CartPole-v0` environment and executes the `env.reset()` method. You can see that the type of `state` is `numpy.ndarray`, and its data type and shape are `np.float64` and `(4,)` respectively. 
-
-```python
-import gym
-
-env = gym.make('CartPole-v0')
-state = env.reset()
-print('type: {}, shape: {}, dtype: {}'.format(type(state), state.shape, state.dtype))
-
-# Result:
-# type: <class 'numpy.ndarray'>, shape: (4,), dtype: float64
-```
-
-`env.reset()` is encapsulated into a MindSpore operator by using the `PyFunc` operator:
-
-- `fn` specifies the name of the Python function to be encapsulated, either a normal function or a member function.
-- `in_types` and `in_shapes` specify the input data types and shapes. `env.reset` has no input, so both are filled with empty lists.
-- `out_types` and `out_shapes` specify the data types and shapes of the returned values. From the previous execution, it can be seen that `env.reset()` returns a numpy array whose data type and shape are `np.float64` and `(4,)` respectively, so `[ms.float64,]` and `[(4,),]` are filled in.
-- `PyFunc` returns tuple(Tensor).
-- For more detailed instructions, refer to the [reference](https://gitee.com/mindspore/mindspore/blob/master/mindspore/python/mindspore/ops/operations/other_ops.py).
-
-## Decoupling Environment and Algorithms
-
-Reinforcement learning algorithms should usually generalize well: for example, an algorithm that solves `HalfCheetah` should also be able to solve `Pendulum`. To achieve this generalization, it is necessary to decouple the environment from the rest of the algorithm, thus ensuring that the rest of the script is modified as little as possible after changing the environment. It is recommended that developers refer to `Environment` to encapsulate the environment. 
-
-```python
-class Environment(nn.Cell):
-    def __init__(self):
-        super(Environment, self).__init__(auto_prefix=False)
-
-    def reset(self):
-        pass
-
-    def step(self, action):
-        pass
-
-    @property
-    def action_space(self) -> Space:
-        pass
-
-    @property
-    def observation_space(self) -> Space:
-        pass
-
-    @property
-    def reward_space(self) -> Space:
-        pass
-
-    @property
-    def done_space(self) -> Space:
-        pass
-```
-
-In addition to the interfaces for interacting with the environment, such as `reset` and `step`, `Environment` needs to provide properties such as `action_space` and `observation_space`, which return a [Space](https://mindspore.cn/reinforcement/docs/en/master/reinforcement.html#mindspore_rl.environment.Space) type. Based on the `Space` information, the algorithm can:
-
-- Obtain the dimensions of the state space and action space of the environment, which are used to construct the neural networks.
-- Read the range of legal actions, and scale and clip the actions given by the policy network.
-- Identify whether the action space of the environment is discrete or continuous, and choose whether to explore the environment using a continuous or discrete distribution.
diff --git a/docs/reinforcement/docs/source_en/index.rst b/docs/reinforcement/docs/source_en/index.rst
deleted file mode 100644
index f6bc99f58375b1f8f8360580c62b1266421beb33..0000000000000000000000000000000000000000
--- a/docs/reinforcement/docs/source_en/index.rst
+++ /dev/null
@@ -1,69 +0,0 @@
-MindSpore Reinforcement Documents
-===================================
-
-MindSpore Reinforcement is a reinforcement learning suite, which supports distributed training of agents by using reinforcement learning algorithms. 
- -MindSpore Reinforcement offers a clean API abstraction for writing reinforcement learning algorithms, which decouples the algorithm from deployment and execution considerations, including the use of accelerators, the level of parallelism and the distribution of computation across a cluster of workers. MindSpore Reinforcement translates the reinforcement learning algorithm into a series of compiled computational graphs, which are then run efficiently by the MindSpore framework on CPUs, GPUs, and Ascend AI processors. - -.. raw:: html - - - -Code repository address: - -Unique Design Features ------------------------ - -1. Offers an algorithmic-centric API for writing reinforcement learning algorithms - - In MindSpore Reinforcement, users describe reinforcement algorithms in Python in terms of intuitive algorithmic concepts, such as agents, actors, environments, and learners. Agents contain actors that interact with an environment and collect rewards. Based on the rewards, learners update policies that govern the behaviour of actors. Users can focus on the implementation of their algorithm without the framework getting in their way. - -2. Decouples reinforcement learning algorithms from their execution strategy - - The API exposed by MindSpore Reinforcement for implementing algorithms makes no assumptions about how the algorithm will be executed. MindSpore Reinforcement can therefore execute the same algorithm on a single laptop with one GPU and on a cluster of machines with many GPUs. Users provide a separate execution configuration, which describes the resources that MindSpore Reinforcement can use for training. - -3. Accelerates reinforcement learning algorithms efficiently - - MindSpore Reinforcement is designed to speed up the training of reinforcement learning algorithms by executing the computation on hardware accelerators, such as GPUs or Ascend AI processors. 
It not only accelerates the neural network computation, but it also translates the logic of actors and learners to computational graphs with parallelizable operators. These computational graphs are executed by the MindSpore framework, taking advantage of its compilation and auto-parallelisation features.
-
-Future Roadmap
----------------
-
-- This initial release of MindSpore Reinforcement contains a stable API for implementing reinforcement learning algorithms and executing computation using MindSpore's computational graphs. It now supports semi-automatic distributed execution of algorithms and multi-agent training, but does not yet support fully automatic distributed capabilities. These features will be included in subsequent versions of MindSpore Reinforcement.
-
-Typical MindSpore Reinforcement Application Scenarios
------------------------------------------------------
-
-- `Train a deep Q network `_
-
-  The DQN algorithm uses an experience replay technique to maintain previous observations for off-policy learning.
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: Installation
-
-   reinforcement_install
-
-.. toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: Guide
-
-   custom_config_info
-   dqn
-   replaybuffer
-   environment
-
-.. toctree::
-   :maxdepth: 1
-   :caption: API References
-
-   reinforcement
-
-.. 
toctree::
-   :glob:
-   :maxdepth: 1
-   :caption: RELEASE NOTES
-
-   RELEASE
\ No newline at end of file
diff --git a/docs/reinforcement/docs/source_en/reinforcement_install.md b/docs/reinforcement/docs/source_en/reinforcement_install.md
deleted file mode 100644
index f25867d5da5782544c249528191df8e19647d0a1..0000000000000000000000000000000000000000
--- a/docs/reinforcement/docs/source_en/reinforcement_install.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# MindSpore Reinforcement Installation
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_en/reinforcement_install.md)
-
-MindSpore Reinforcement depends on the MindSpore training and inference framework. Therefore, install [MindSpore](https://gitee.com/mindspore/mindspore#安装) first and then MindSpore Reinforcement. You can install MindSpore Reinforcement either by pip or by source code.
-
-## Installation by pip
-
-If you use the pip command, download the .whl package from the [MindSpore Reinforcement page](https://www.mindspore.cn/versions/en) and install it.
-
-```shell
-pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{ms_version}/Reinforcement/any/mindspore_rl-{mr_version}-py3-none-any.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
-```
-
-> - When the network is connected, dependencies are automatically downloaded during .whl package installation (for details about other dependencies, see requirements.txt); otherwise, you need to install them manually.
-> - `{ms_version}` refers to the MindSpore version that matches MindSpore Reinforcement. For example, if you want to install MindSpore Reinforcement 0.1.0, then `{ms_version}` should be 1.5.0.
-> - `{mr_version}` refers to the version of MindSpore Reinforcement. 
For example, when you are downloading MindSpore Reinforcement 0.1.0, `{mr_version}` should be 0.1.0.
-
-## Installation by Source Code
-
-Download the [source code](https://github.com/mindspore-lab/mindrl), and enter the `reinforcement` directory.
-
-```shell
-bash build.sh
-pip install output/mindspore_rl-0.1.0-py3-none-any.whl
-```
-
-`build.sh` is the compile script under the `reinforcement` directory.
-
-## Installation Verification
-
-Execute the following command. If no error is reported, the installation is successful:
-
-```python
-import mindspore_rl
-```
-
diff --git a/docs/reinforcement/docs/source_en/replaybuffer.md b/docs/reinforcement/docs/source_en/replaybuffer.md
deleted file mode 100644
index 6572bc8747ec48dc11cdc4887a1f472aee5ca058..0000000000000000000000000000000000000000
--- a/docs/reinforcement/docs/source_en/replaybuffer.md
+++ /dev/null
@@ -1,138 +0,0 @@
-# ReplayBuffer Usage Introduction
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_en/replaybuffer.md)
-
-## Brief Introduction of ReplayBuffer
-
-In reinforcement learning, the ReplayBuffer is a common basic data storage mechanism, whose function is to store the data obtained from the agent's interaction with its environment.
-
-Using a ReplayBuffer solves the following problems:
-
-1. Stored historical data can be extracted by sampling to break the correlation of the training data, so that the sampled data have independent and identically distributed characteristics.
-2. It provides temporary storage of data and improves data utilization. 
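Point 1 above can be illustrated with a few lines of plain Python (an illustration only, not the framework's buffer): consecutive transitions come from a single episode and are highly correlated, while a uniform random minibatch drawn from the stored history mixes steps from many episodes:

```python
import random

random.seed(7)

# Pretend transitions from 10 episodes of 20 steps each were stored:
buffer = [(episode, step) for episode in range(10) for step in range(20)]

sequential = buffer[:8]               # eight consecutive steps, one episode
minibatch = random.sample(buffer, 8)  # uniform sampling over the history

print({ep for ep, _ in sequential})      # a single episode id
print(len({ep for ep, _ in minibatch}))  # typically several distinct episodes
```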
-
-## ReplayBuffer Implementation of MindSpore Reinforcement Learning
-
-Typically, people use native Python or NumPy data structures to construct a ReplayBuffer, and general reinforcement learning frameworks may also provide standard API encapsulations. MindSpore, by contrast, implements the ReplayBuffer structure on the device side. On the one hand, this reduces the frequent copying of data between host and device when using GPU hardware; on the other hand, expressing the ReplayBuffer in the form of MindSpore operators makes it possible to build a complete IR graph and enable the various graph optimizations of MindSpore GRAPH_MODE, improving overall performance.
-
-In MindSpore, two kinds of ReplayBuffer are provided: UniformReplayBuffer and PriorityReplayBuffer, used for plain FIFO storage and prioritized storage, respectively. The following takes the UniformReplayBuffer as an example of the implementation and usage.
-
-The ReplayBuffer is represented as a list of Tensors, where each Tensor represents a set of data stored by column (e.g., a set of [state, action, reward]). The data newly put into the UniformReplayBuffer is updated with a FIFO mechanism, and the buffer provides insert, search, and sample functions.
-
-### Parameter Explanation
-
-Create a UniformReplayBuffer with the initialization parameters batch_size, capacity, shapes, and types.
-
-* batch_size indicates the number of items returned by one call to sample, an integer value.
-* capacity indicates the total capacity of the created UniformReplayBuffer, an integer value.
-* shapes indicates the shape of each column of data in the buffer, expressed as a list.
-* types indicates the data type corresponding to each column of data in the buffer, expressed as a list.
-
-### Functions Introduction
-
-#### 1 Insert
-
-The insert method takes a set of data as input, whose shapes and types must match the parameters used to create the UniformReplayBuffer. It has no output. 
To simulate the FIFO characteristics of a circular queue, we use two cursors to track the head and the effective length count of the queue. The following figure shows the process of several insertion operations:
-
-1. The total size of the buffer is 6. In the initial state, the cursors head and count are both 0.
-2. After inserting a batch_size of 2, head is unchanged and count is increased by 2.
-3. After continuing to insert a batch_size of 4, the queue is full and count is 6.
-4. After continuing to insert a batch_size of 2, the oldest data is overwritten and head advances by 2.
-
-![insert schematic diagram](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/reinforcement/docs/source_zh_cn/images/insert.png)
-
-#### 2 Search
-
-The search method accepts an index as input, indicating the specific location of the data to be found. The output is a set of Tensors, as shown in the following figure:
-
-1. If the UniformReplayBuffer is just full or not yet full, the corresponding data is found directly according to the index.
-2. For data that has been overwritten, the index is remapped by the cursors.
-
-![get_item schematic diagram](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/reinforcement/docs/source_zh_cn/images/get.png)
-
-#### 3 Sample
-
-The sample method has no input, and the output is a set of Tensors whose size is the batch_size specified when the UniformReplayBuffer was created, as shown in the following figure.
-Assuming that batch_size is 3, a random set of indexes is generated in the operator, and this random set of indexes falls into two cases:
-
-1. Order preserving: each index denotes a logical data position and needs to be remapped to the physical position by a cursor operation.
-2. No order preserving: each index is used directly as a physical position, without remapping.
-
-Both approaches have only a slight impact on randomness, and the default is no order preserving, which gives the best performance. 
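The head/count cursor mechanics described in the insert and search sections above can be sketched in plain Python. This is an illustration of the scheme, not the UniformReplayBuffer implementation:

```python
class RingBuffer:
    """Plain-Python sketch of the FIFO cursor mechanics described above."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity
        self.head = 0    # physical position of the oldest element
        self.count = 0   # number of valid elements

    def insert(self, batch):
        for item in batch:
            if self.count == self.capacity:
                # Buffer full: overwrite the oldest element and advance head.
                self.data[self.head] = item
                self.head = (self.head + 1) % self.capacity
            else:
                self.data[(self.head + self.count) % self.capacity] = item
                self.count += 1

    def get_item(self, index):
        # Remap the logical index to the physical position via the cursors.
        return self.data[(self.head + index) % self.capacity]

buf = RingBuffer(6)
buf.insert([1, 2])          # head 0, count 2
buf.insert([3, 4, 5, 6])    # queue full: head 0, count 6
buf.insert([7, 8])          # overwrite: head 2, count 6
print(buf.head, buf.count)  # 2 6
print(buf.get_item(0))      # 3, the oldest surviving element
```

The numbers reproduce the four insertion steps shown in the insert figure: capacity 6, then batches of 2, 4, and 2.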
-
-![sample schematic diagram](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/reinforcement/docs/source_zh_cn/images/sample.png)
-
-## UniformReplayBuffer in MindSpore Reinforcement Learning
-
-### Creation of UniformReplayBuffer
-
-MindSpore Reinforcement Learning provides a standard ReplayBuffer API. Users can let the framework create the ReplayBuffer by means of a configuration file, similar to the configuration file of [dqn](https://github.com/mindspore-lab/mindrl/tree/master/mindspore_rl/algorithm/dqn/config.py):
-
-```python
-'replay_buffer':
-    {'number': 1,
-     'type': UniformReplayBuffer,
-     'capacity': 100000,
-     'data_shape': [(4,), (1,), (1,), (4,)],
-     'data_type': [ms.float32, ms.int32, ms.float32, ms.float32],
-     'sample_size': 64}
-```
-
-Alternatively, users can use the interfaces directly to create the required data structures:
-
-```python
-from mindspore_rl.core.uniform_replay_buffer import UniformReplayBuffer
-import mindspore as ms
-sample_size = 2
-capacity = 100000
-shapes = [(4,), (1,), (1,), (4,)]
-types = [ms.float32, ms.int32, ms.float32, ms.float32]
-replaybuffer = UniformReplayBuffer(sample_size, capacity, shapes, types)
-```
-
-### Using the Created UniformReplayBuffer
-
-Take the [UniformReplayBuffer](https://github.com/mindspore-lab/mindrl/tree/master/mindspore_rl/core/uniform_replay_buffer.py) created through the API above as an example of data manipulation:
-
-* Insert operation
-
-```python
-state = ms.Tensor([0.1, 0.2, 0.3, 0.4], ms.float32)
-action = ms.Tensor([1], ms.int32)
-reward = ms.Tensor([1], ms.float32)
-new_state = ms.Tensor([0.4, 0.3, 0.2, 0.1], ms.float32)
-replaybuffer.insert([state, action, reward, new_state])
-replaybuffer.insert([state, action, reward, new_state])
-```
-
-* Search operation
-
-```python
-exp = replaybuffer.get_item(0)
-```
-
-* Sample operation
-
-```python
-samples = replaybuffer.sample()
-```
-
-* Reset operation
-
-```python
-replaybuffer.reset()
-```
-
-* Get the number of elements currently used in the buffer
-
-```python
-size = replaybuffer.size()
-```
-
-* Determine whether the current buffer is full
-
-```python
-if replaybuffer.full():
-    print("The buffer is full.")
-```
diff --git a/docs/reinforcement/docs/source_zh_cn/conf.py b/docs/reinforcement/docs/source_zh_cn/conf.py
deleted file mode 100644
index a4b34476bddd12f5ad92aca67a5e56b87a0c781e..0000000000000000000000000000000000000000
--- a/docs/reinforcement/docs/source_zh_cn/conf.py
+++ /dev/null
@@ -1,243 +0,0 @@
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import sys
-import IPython
-import re
-sys.path.append(os.path.abspath('../_ext'))
-from sphinx.ext import autodoc as sphinx_autodoc
-
-
-# -- Project information -----------------------------------------------------
-
-project = 'MindSpore Reinforcement'
-copyright = 'MindSpore'
-author = 'MindSpore'
-
-# The full version, including alpha/beta/rc tags
-release = 'master'
-
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-myst_enable_extensions = ["dollarmath", "amsmath"] - - -myst_heading_anchors = 5 -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.doctest', - 'sphinx.ext.intersphinx', - 'sphinx.ext.todo', - 'sphinx.ext.coverage', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'myst_parser', - 'sphinx.ext.mathjax', - 'IPython.sphinxext.ipython_console_highlighting' -] - -source_suffix = { - '.rst': 'restructuredtext', - '.md': 'markdown', -} - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -mathjax_path = 'https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/mathjax/MathJax-3.2.2/es5/tex-mml-chtml.js' - -mathjax_options = { - 'async':'async' -} - -smartquotes_action = 'De' - -exclude_patterns = [] - -pygments_style = 'sphinx' - -# -- Options for HTML output ------------------------------------------------- - -# Reconstruction of sphinx auto generated document translation. -language = 'zh_CN' -locale_dirs = ['../../../../resource/locale/'] -gettext_compact = False - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = 'sphinx_rtd_theme' - -html_search_language = 'zh' - -html_search_options = {'dict': '../../../resource/jieba.txt'} - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - 'python': ('https://docs.python.org/3', '../../../../resource/python_objects.inv'), -} - -from sphinx import directives -with open('../_ext/overwriteobjectiondirective.txt', 'r', encoding="utf8") as f: - exec(f.read(), directives.__dict__) - -from sphinx.ext import viewcode -with open('../_ext/overwriteviewcode.txt', 'r', encoding="utf8") as f: - exec(f.read(), viewcode.__dict__) - -# Modify default signatures for autodoc. 
-autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__) -autodoc_source_re = re.compile(r'stringify_signature\(.*?\)') -get_param_func_str = r"""\ -import re -import inspect as inspect_ - -def get_param_func(func): - try: - source_code = inspect_.getsource(func) - if func.__doc__: - source_code = source_code.replace(func.__doc__, '') - all_params_str = re.findall(r"def [\w_\d\-]+\(([\S\s]*?)(\):|\) ->.*?:)", source_code) - all_params = re.sub("(self|cls)(,|, )?", '', all_params_str[0][0].replace("\n", "").replace("'", "\"")) - return all_params - except: - return '' - -def get_obj(obj): - if isinstance(obj, type): - return obj.__init__ - - return obj -""" - -with open(autodoc_source_path, "r+", encoding="utf8") as f: - code_str = f.read() - code_str = autodoc_source_re.sub('"(" + get_param_func(get_obj(self.object)) + ")"', code_str, count=0) - exec(get_param_func_str, sphinx_autodoc.__dict__) - exec(code_str, sphinx_autodoc.__dict__) - -with open("../_ext/customdocumenter.txt", "r", encoding="utf8") as f: - code_str = f.read() - exec(code_str, sphinx_autodoc.__dict__) - -# Copy source files of chinese python api from reinforcement repository. -from sphinx.util import logging -import shutil -logger = logging.getLogger(__name__) - -copy_path = 'docs/api/api_python' -src_dir = os.path.join(os.getenv("RM_PATH"), copy_path) - -copy_list = [] - -present_path = os.path.dirname(__file__) - -for i in os.listdir(src_dir): - if os.path.isfile(os.path.join(src_dir,i)): - if os.path.exists('./'+i): - os.remove('./'+i) - shutil.copy(os.path.join(src_dir,i),'./'+i) - copy_list.append(os.path.join(present_path,i)) - else: - if os.path.exists('./'+i): - shutil.rmtree('./'+i) - shutil.copytree(os.path.join(src_dir,i),'./'+i) - copy_list.append(os.path.join(present_path,i)) - -# Rename .rst file to .txt file for include directive. 
-from rename_include import rename_include - -rename_include(present_path) - -# add view -import json - -if os.path.exists('../../../../tools/generate_html/version.json'): - with open('../../../../tools/generate_html/version.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily_dev.json'): - with open('../../../../tools/generate_html/daily_dev.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) -elif os.path.exists('../../../../tools/generate_html/daily.json'): - with open('../../../../tools/generate_html/daily.json', 'r+', encoding='utf-8') as f: - version_inf = json.load(f) - -if os.getenv("RM_PATH").split('/')[-1]: - copy_repo = os.getenv("RM_PATH").split('/')[-1] -else: - copy_repo = os.getenv("RM_PATH").split('/')[-2] - -branch = [version_inf[i]['branch'] for i in range(len(version_inf)) - if version_inf[i]['name'] == copy_repo.replace('mindrl', 'reinforcement')][0] -docs_branch = [version_inf[i]['branch'] for i in range(len(version_inf)) if version_inf[i]['name'] == 'tutorials'][0] - -re_view = f"\n.. image:: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/{docs_branch}/" + \ - f"resource/_static/logo_github_source.svg\n :target: https://github.com/mindspore-lab/{copy_repo}/blob/{branch}/" - -for cur, _, files in os.walk(present_path): - for i in files: - flag_copy = 0 - if i.endswith('.rst'): - for j in copy_list: - if j in cur: - flag_copy = 1 - break - if os.path.join(cur, i) in copy_list or flag_copy: - try: - with open(os.path.join(cur, i), 'r+', encoding='utf-8') as f: - content = f.read() - new_content = content - if '.. include::' in content and '.. 
automodule::' in content:
-                            continue
-                        if 'autosummary::' not in content and "\n=====" in content:
-                            re_view_ = re_view + copy_path + cur.split(present_path)[-1] + '/' + i + \
-                                       '\n    :alt: 查看源文件\n\n'
-                            new_content = re.sub('([=]{5,})\n', r'\1\n' + re_view_, content, 1)
-                        if new_content != content:
-                            f.seek(0)
-                            f.truncate()
-                            f.write(new_content)
-                except Exception:
-                    print(f'打开{i}文件失败')
-
-import mindspore_rl
-
-sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
-# import anchor_mod
-import nbsphinx_mod
-
-sys.path.append(os.path.abspath('../../../../resource/search'))
-import search_code
-
-sys.path.append(os.path.abspath('../../../../resource/custom_directives'))
-from custom_directives import IncludeCodeDirective
-
-def setup(app):
-    app.add_directive('includecode', IncludeCodeDirective)
-
-src_release = os.path.join(os.getenv("RM_PATH"), 'RELEASE_CN.md')
-des_release = "./RELEASE.md"
-with open(src_release, "r", encoding="utf-8") as f:
-    data = f.read()
-if len(re.findall("\n## (.*?)\n",data)) > 1:
-    content = re.findall("(## [\s\S\n]*?)\n## ", data)
-else:
-    content = re.findall("(## [\s\S\n]*)", data)
-#result = content[0].replace('# MindSpore', '#', 1)
-with open(des_release, "w", encoding="utf-8") as p:
-    p.write("# Release Notes"+"\n\n")
-    p.write(content[0])
\ No newline at end of file
diff --git a/docs/reinforcement/docs/source_zh_cn/custom_config_info.md b/docs/reinforcement/docs/source_zh_cn/custom_config_info.md
deleted file mode 100644
index df27d9b33fb09cedd65c7a19ecb14708ee02f6cf..0000000000000000000000000000000000000000
--- a/docs/reinforcement/docs/source_zh_cn/custom_config_info.md
+++ /dev/null
@@ -1,194 +0,0 @@
-# Reinforcement Learning Configuration Instructions
-
-[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_zh_cn/custom_config_info.md)
-&nbsp;&nbsp;
-
-## Overview
-
-As one of the fastest-developing directions at present, deep reinforcement learning produces new algorithms constantly. MindSpore 
Reinforcement models reinforcement learning algorithms as objects such as Actor, Learner, Policy, Environment, and ReplayBuffer, thereby providing an easily extensible and highly reusable reinforcement learning framework. At the same time, deep reinforcement learning algorithms are relatively complex, and the training results of the networks are affected by many parameters. MindSpore Reinforcement therefore provides a centralized parameter configuration interface, which separates the algorithm implementation from the deployment details and makes it convenient for users to quickly adjust the model and algorithm.
-
-Taking the DQN algorithm as an example, this article introduces how to use the MindSpore Reinforcement algorithm and training parameter configuration interfaces, helping users quickly customize and tune reinforcement learning algorithms.
-
-You can obtain the DQN algorithm code from [https://github.com/mindspore-lab/mindrl/tree/master/example/dqn](https://github.com/mindspore-lab/mindrl/tree/master/example/dqn).
-
-## Algorithm-Related Parameter Configuration
-
-MindSpore-RL uses `algorithm_config` to define the logical components and the corresponding hyperparameter configuration. `algorithm_config` is a Python dictionary that describes the actor, learner, policy_and_network, collect_environment, eval_environment, and replay buffer respectively. The framework can execute the algorithm based on this configuration, so users only need to focus on the algorithm design.
-
-The following code defines a set of algorithm configurations and uses algorithm_config to create a `Session`. The `Session` is responsible for allocating resources and for compiling and executing the computation graph.
-
-```python
-from mindspore_rl.mindspore_rl import Session
-algorithm_config = {
-    'actor': {...},
-    'learner': {...},
-    'policy_and_network': {...},
-    'collect_environment': {...},
-    'eval_environment': {...},
-    'replay_buffer': {...}
-}
-
-session = Session(algorithm_config)
-session.run(...)
-```
-
-The meaning and usage of each parameter in algorithm_config are described in detail below.
-
-### Policy Configuration Parameters
-
-A Policy is usually used by the agent to decide the next action to take. The algorithm requires the policy type name `type` and the parameters `params`:
-
-- `type`: specifies the type of the Policy. The Actor decides which action to take through the Policy. In deep reinforcement learning, a Policy usually uses a deep neural network to extract features from the environment and outputs the next action to take.
-- `params`: specifies the parameters used when instantiating the Policy. Note that `params` and `type` must match.
-
-The following example defines the policy and parameter configuration. The Policy is the user-defined `DQNPolicy`, with parameters such as the epsilon-greedy decay settings, the learning rate, and the hidden layer size of the network model. The framework creates the Policy object via `DQNPolicy(policy_params)`.
-
-```python
-from dqn.src.dqn import DQNPolicy
-
-policy_params = {
-    'epsi_high': 0.1,        # epsi_high/epsi_low/decay jointly control the exploration-exploitation ratio:
-    'epsi_low': 0.1,         # epsi_high is the maximum exploration ratio, epsi_low the minimum, decay the decay step
-    'decay': 200,
-    'state_space_dim': 0,    # dimension of the state space; 0 means reading the state space information from the external environment
-    'action_space_dim': 0,   # dimension of the action space; 0 means obtaining the action space information from the external environment
-    'hidden_size': 100,      # dimension of the hidden layer
-}
-
-algorithm_config = {
-    ...
-    'policy_and_network': {
-        'type': DQNPolicy,
-        'params': policy_params,
-    },
-    ...
-}
-```
-
-| 键值 | 类型 | 范围 | 说明 |
-| :----: | :----------------: | :---------------------------: | :----------------------------------------------------------: |
-| type | Class | 用户定义的Policy类 | 和用户定义的Policy类名相同 |
-| params(可选) | Dictionary | 任意key value形式的值或者None | 自定义参数,用户可以通过key value的形式传入任何值 |
-
-### Environment配置参数
-
-`collect_environment`和`eval_environment`分别表示运行过程中收集数据的环境和用来评估模型的环境,算法中需要指定环境数量`number`、类型名`type`和参数`params`:
-
-- `number`:在算法中所需要的环境数量。
-
-- `type`:指定环境的类型名,这里可以是Reinforcement内置的环境,例如`GymEnvironment`,也可以是用户自定义的环境类型。
-
-- `params`:指定实例化相应外部环境的参数。需要注意的是,`params`和`type`需要匹配。
-
-以下样例中定义了外部环境配置,框架会采用`GymEnvironment(name='CartPole-v0')`方式创建`CartPole-v0`外部环境。`collect_environment`和`eval_environment`的配置参数是一样的。
-
-```python
-from mindspore_rl.environment import GymEnvironment
-collect_env_params = {'name': 'CartPole-v0'}
-eval_env_params = {'name': 'CartPole-v0'}
-algorithm_config = {
-    ...
-    'collect_environment': {
-        'number': 1,
-        'type': GymEnvironment,          # 外部环境类名
-        'params': collect_env_params     # 环境参数
-    },
-    'eval_environment': {
-        'number': 1,
-        'type': GymEnvironment,          # 外部环境类名
-        'params': eval_env_params        # 环境参数
-    },
-    ...
-} -``` - -| 键值 | 类型 | 范围 | 说明 | -| :----------------: | :--------: | :---------------------------: | :----------------------------------------------------------: | -| number(可选) | Integer | [1, +∞) | 当用户选择填写number这项时,填入的环境数量至少为1个。当用户不选择填入number这项时,框架会直接创建环境实例而不会调用`MultiEnvironmentWrapper`类来包装环境 | -| num_parallel(可选) | Integer | [1, number] | 不填时默认开启环境并行。用户可通过填写num_parallel: 1来关闭环境并行,或者配置自己需要的并行参数。 | -| type | Class | Environment类的子类 | 外部环境类名 | -| params(可选) | Dictionary | 任意key value形式的值或者None | 自定义参数,用户可以通过key value的形式传入任何值 | - -### Actor配置参数 - -`Actor`负责与外部环境交互。通常`Actor`需要基于`Policy` 与`Env`交互,部分算法中还会将交互得到的经验存入`ReplayBuffer`中,因此`Actor`会持有`Policy`和`Environment`,并且按需创建`ReplayBuffer`。`Actor配置参数`中,`policies/networks`指定`Policy`中的成员对象名称。 - -以下代码中定义`DQNActor`配置,框架会采用`DQNActor(algorithm_config['actor'])`方式创建Actor。 - -```python -algorithm_config = { - ... - 'actor': { - 'number': 1, # Actor个数 - 'type': DQNActor, # Actor类名 - 'policies': ['init_policy', 'collect_policy', 'eval_policy'], # 从Policy中提取名为init_policy/collect_policy/eval_policy成员对象,用于构建Actor - 'share_env': True # 每个actor是否共享环境 - } - ... -} -``` - -| 键值 | 类型 | 范围 | 说明 | -| :--------------: | :------------: | :---------------------------------: | :----------------------------------------------------------: | -| number | Integer | [1, +∞) | 目前actor数量暂时不支持1以外的数值 | -| type | Class | 用户定义的继承actor并实现虚函数的类 | 和用户定义的继承actor并实现虚函数的类名相同 | -| params(可选) | Dictionary | 任意key value形式的值或者None | 自定义参数,用户可以通过key value的形式传入任何值 | -| policies | List of String | 和用户定义的策略变量名相同 | 列表中的所有String都应该和用户定义的策略类中初始化的策略变量名一一对应 | -| networks(可选) | List of String | 和用户定义的网络变量名相同 | 列表中的所有String都应该和用户定义的策略类中初始化的网络变量名一一对应 | -| share_env(可选) | Boolean | True 或 False | 默认值为True, 即各个actor共享一个环境。如果为False, 则单独为每个actor创建一个collect环境实例 | - -### ReplayBuffer配置参数 - -在部分算法中,`ReplayBuffer`用于储存Actor和环境交互的经验。之后会从`ReplayBuffer`中取出数据,用于网络训练。 - -```python -from mindspore_rl.core.replay_buffer import ReplayBuffer -algorithm_config = { - ... 
- 'replay_buffer': {'number': 1, - 'type': ReplayBuffer, - 'capacity': 100000, # ReplayBuffer容量 - 'sample_size': 64, # 采样Batch Size - 'data_shape': [(4,), (1,), (1,), (4,)], # ReplayBuffer的维度信息 - 'data_type': [ms.float32, ms.int32, ms.float32, ms.float32]}, # ReplayBuffer数据类型 -} -``` - -| 键值 | 类型 | 范围 | 说明 | -| :---------------: | :-------------------------: | :----------------------------------: | :---------------------------------------------------: | -| number | Integer | [1, +∞) | 需要的Buffer数量 | -| type | Class | 用户定义或者框架提供的ReplayBuffer类 | 用户定义或者框架提供的ReplayBuffer的类名相同 | -| capacity | Integer | [0, +∞) | ReplayBuffer容量 | -| data_shape | List of Integer Tuple | [0, +∞) | Tuple中的第一个值需要和环境数量相等,如是单环境则不填 | -| data_type | List of mindspore data type | 需要是MindSpore的数据类型 | data_type的长度和data_shape的长度相同 | -| sample_size(可选) | Integer | [0, capacity] | 值必须小于capacity。不填时,默认为1 | - -### Learner配置参数 - -`Learner`负责基于历史经验对网络权重进行更新。`Learner`中持有`Policy`中定义的DNN网络(由`networks`指定`Policy`的成员对象名称),用于损失函数计算和网络权重更新。 - -以下代码中定义`DQNLearner`配置,框架会采用`DQNLearner(algorithm_config['learner'])`方式创建Learner。 - -```python -from dqn.src.dqn import DQNLearner -learner_params = {'gamma': 0.99, - 'lr': 0.001, # 学习率 - } -algorithm_config = { - ... - 'learner': { - 'number': 1, # Learner个数 - 'type': DQNLearner, # Learner类名 - 'params': learner_params, # Learner 需要的参数 - 'networks': ['policy_network', 'target_network'] # Learner从Policy中提取名为target_net/policy_network成员对象,用于更新 - }, - ... 
-}
-```
-
-| 键值 | 类型 | 范围 | 说明 |
-| :----------: | :------------: | :-----------------------------------: | :----------------------------------------------------------: |
-| number | Integer | [1, +∞) | 目前learner数量暂时不支持1以外的数值 |
-| type | Class | 用户定义的继承learner并实现虚函数的类 | 和用户定义的继承learner并实现虚函数的类名相同 |
-| params(可选) | Dictionary | 任意key value形式的值或者None | 自定义参数,用户可以通过key value的形式传入任何值 |
-| networks | List of String | 和定义的网络名变量相同 | 列表中的所有String都应该和用户定义的策略类中初始化的网络变量名一一对应 |
diff --git a/docs/reinforcement/docs/source_zh_cn/dqn.md b/docs/reinforcement/docs/source_zh_cn/dqn.md
deleted file mode 100644
index f6597a300f3f18864459ab4b4312e18637946e96..0000000000000000000000000000000000000000
--- a/docs/reinforcement/docs/source_zh_cn/dqn.md
+++ /dev/null
@@ -1,411 +0,0 @@
-# 使用MindSpore Reinforcement实现深度Q学习(DQN)
-
-[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_zh_cn/dqn.md)
-
-## 摘要
-
-为了使用MindSpore Reinforcement实现强化学习算法,用户需要:
-
-- 提供算法配置,将算法的实现与其部署细节分开;
-- 基于Actor-Learner-Environment抽象实现算法;
-- 创建一个执行已实现的算法的会话对象。
-
-本教程展示了使用MindSpore Reinforcement API实现深度Q学习(DQN)算法。注:为保证清晰性和可读性,仅显示与API相关的代码,不相关的代码已省略。点击[此处](https://github.com/mindspore-lab/mindrl/tree/master/example/dqn)获取MindSpore Reinforcement实现完整DQN的源代码。
-
-## 指定DQN的Actor-Learner-Environment抽象
-
-DQN算法需要两个深度神经网络,一个*策略网络*用于近似动作值函数(Q函数),另一个*目标网络*用于稳定训练。策略网络指如何对环境采取行动的策略,DQN算法的目标是训练策略网络以获得最大的奖励。此外,DQN算法使用*经验回放*技术来维护先前的观察结果,进行off-policy学习,其中Actor使用不同的行为策略来对环境采取行动。
-
-MindSpore Reinforcement使用*算法配置*指定DQN算法所需的逻辑组件(Actor、Learner、Policy and Network、Collect Environment、Eval Environment、ReplayBuffer)和关联的超参数。根据提供的配置,它使用不同的策略执行算法,以便用户可以专注于算法设计。
-
-算法配置是一个Python字典,指定如何构造DQN算法的不同组件。每个组件的超参数在单独的Python字典中配置。DQN算法配置定义如下:
-
-```python
-algorithm_config = {
-    'actor': {
-        'number': 1,                                         # Actor实例的数量
-        'type': DQNActor,                                    # 需要创建的Actor类
-        'policies': ['init_policy', 'collect_policy', 'evaluate_policy'],    # Actor需要用到的选择动作的策略
-    },
-    'learner': {
-        'number': 1,                                         # Learner实例的数量
-        'type': DQNLearner,                                  # 需要创建的Learner类
-        'params': learner_params,                            # Learner需要用到的参数
-        'networks': ['policy_network', 'target_network']     # Learner中需要用到的网络
-    },
-    'policy_and_network': {
-        'type': DQNPolicy,                                   # 需要创建的Policy类
-        'params': policy_params                              # Policy中需要用到的参数
-    },
-    'collect_environment': {
-        'number': 1,                                         # Collect Environment实例的数量
-        'type': GymEnvironment,                              # 需要创建的Collect Environment类
-        'params': collect_env_params                         # Collect Environment中需要用到的参数
-    },
-    'eval_environment': {
-        'number': 1,                                         # 同Collect Environment
-        'type': GymEnvironment,
-        'params': eval_env_params
-    },
-    'replay_buffer': {'number': 1,                           # ReplayBuffer实例的数量
-                      'type': ReplayBuffer,                  # 需要创建的ReplayBuffer类
-                      'capacity': 100000,                    # ReplayBuffer大小
-                      'data_shape': [(4,), (1,), (1,), (4,)],    # ReplayBuffer中的数据Shape
-                      'data_type': [ms.float32, ms.int32, ms.float32, ms.float32],    # ReplayBuffer中的数据Type
-                      'sample_size': 64},                    # ReplayBuffer单次采样的数据量
-}
-```
-
-以上配置定义了六个顶层项,每个配置对应一个算法组件:*actor、learner、policy*、*replay_buffer*和两个*environment*。每个项对应一个类,该类必须由用户定义,或者使用MindSpore Reinforcement提供的组件,以实现DQN算法的逻辑。
-
-顶层项具有描述组件的子项。*number*定义算法使用的组件的实例数。*type*表示必须定义的Python类的名称,用于实现组件。*params*为组件提供必要的超参数。*actor*中的*policies*定义组件使用的策略。*learner*中的*networks*列出了此组件使用的所有神经网络。在DQN示例中,只有Actor与环境交互。*replay_buffer*定义回放缓冲区的*容量、形状、样本大小和数据类型*。
-
-对于DQN算法,我们配置了一个Actor `'number': 1`,它的Python类`'type': DQNActor`,以及三个行为策略`'policies': ['init_policy', 'collect_policy', 'evaluate_policy']`。
-
-其他组件也以类似的方式定义。有关更多详细信息,请参阅[完整代码示例](https://github.com/mindspore-lab/mindrl/tree/master/example/dqn)和[API](https://www.mindspore.cn/reinforcement/docs/zh-CN/master/reinforcement.html)。
-
-请注意,MindSpore Reinforcement使用单个*policy*类来定义算法使用的所有策略和神经网络。通过这种方式,它隐藏了策略和神经网络之间数据共享和通信的复杂性。
-
-在train.py文件中,需要通过调用MindSpore Reinforcement的*Session*来执行算法。*Session*在一台或多台集群计算机上分配资源,并执行编译后的计算图。用户传入算法配置以实例化Session类:
-
-```python
-from mindspore_rl.core import Session
-dqn_session = Session(dqn_algorithm_config)
-```
-
-调用Session对象上的run方法,并传入对应的参数来执行DQN算法。其中,*class_type*是我们定义的Trainer类,这里是DQNTrainer(后面会介绍如何实现Trainer类);*episode*为需要运行的循环次数;*params*为config文件中定义的trainer所需要用到的参数,具体可查看完整代码中*config.py*的内容;*callbacks*定义了需要用到的统计方法等,具体请参考API中的Callback相关内容。
-
-```python
-from src.dqn_trainer import DQNTrainer
-from mindspore_rl.utils.callback import CheckpointCallback, LossCallback, EvaluateCallback
-loss_cb = LossCallback()
-ckpt_cb = CheckpointCallback(50, config.trainer_params['ckpt_path'])
-eval_cb = EvaluateCallback(10)
-cbs = [loss_cb, ckpt_cb, eval_cb]
-dqn_session.run(class_type=DQNTrainer, episode=episode, params=config.trainer_params, callbacks=cbs)
-```
-
-为使用MindSpore的计算图功能,将执行模式设置为`GRAPH_MODE`。
-
-```python
-import mindspore as ms
-ms.set_context(mode=ms.GRAPH_MODE)
-```
-
-`@jit`修饰的函数和方法将会编译到MindSpore计算图用于自动并行和加速。在本教程中,我们使用此功能来实现一个高效的`DQNTrainer`类。
-
-### 定义DQNTrainer类
-
-`DQNTrainer`类表示算法的流程编排,主要流程为:循环迭代地与环境交互,将经验存入*ReplayBuffer*中,然后从*ReplayBuffer*中取出经验并训练目标模型。它必须继承自`Trainer`类,该类是MindSpore Reinforcement API的一部分。
-
-`Trainer`基类包含`MSRL`(MindSpore Reinforcement)对象,该对象允许算法实现与MindSpore Reinforcement交互,以实现训练逻辑。`MSRL`类根据先前定义的算法配置实例化RL算法组件。它提供了函数处理程序,这些处理程序透明地绑定到用户定义的Actor、Learner或ReplayBuffer的方法。因此,`MSRL`类让用户能够专注于算法逻辑,同时它透明地处理一个或多个worker上不同算法组件之间的对象创建、数据共享和通信。用户通过使用算法配置创建上文提到的`Session`对象来实例化`MSRL`对象。
-
-`DQNTrainer`必须重载`train_one_episode`用于训练、`evaluate`用于评估,以及`trainable_variables`用于保存检查点。在本教程中,它的定义如下:
-
-```python
-class DQNTrainer(Trainer):
-    def __init__(self, msrl, params):
-        ...
- super(DQNTrainer, self).__init__(msrl) - - def trainable_variables(self): - """Trainable variables for saving.""" - trainable_variables = {"policy_net": self.msrl.learner.policy_network} - return trainable_variables - - @ms.jit - def init_training(self): - """Initialize training""" - state = self.msrl.collect_environment.reset() - done = self.false - i = self.zero_value - while self.less(i, self.fill_value): - done, _, new_state, action, my_reward = self.msrl.agent_act( - trainer.INIT, state) - self.msrl.replay_buffer_insert( - [state, action, my_reward, new_state]) - state = new_state - if done: - state = self.msrl.collect_environment.reset() - done = self.false - i += 1 - return done - - @ms.jit - def evaluate(self): - """Policy evaluate""" - total_reward = self.zero_value - eval_iter = self.zero_value - while self.less(eval_iter, self.num_evaluate_episode): - episode_reward = self.zero_value - state = self.msrl.eval_environment.reset() - done = self.false - while not done: - done, r, state = self.msrl.agent_act(trainer.EVAL, state) - r = self.squeeze(r) - episode_reward += r - total_reward += episode_reward - eval_iter += 1 - avg_reward = total_reward / self.num_evaluate_episode - return avg_reward -``` - -用户调用`train`方法会调用Trainer基类的`train`。然后,为它指定数量的episode(iteration)训练模型,每个episode调用用户定义的`train_one_episode`方法。最后,train方法通过调用`evaluate`方法来评估策略以获得奖励值。 - -在训练循环的每次迭代中,调用`train_one_episode`方法来训练一个episode: - -```python -@ms.jit -def train_one_episode(self): - """Train one episode""" - if not self.inited: - self.init_training() - self.inited = self.true - state = self.msrl.collect_environment.reset() - done = self.false - total_reward = self.zero - steps = self.zero - loss = self.zero - while not done: - done, r, new_state, action, my_reward = self.msrl.agent_act( - trainer.COLLECT, state) - self.msrl.replay_buffer_insert( - [state, action, my_reward, new_state]) - state = new_state - r = self.squeeze(r) - loss = self.msrl.agent_learn(self.msrl.replay_buffer_sample()) 
-        total_reward += r
-        steps += 1
-        if not self.mod(steps, self.update_period):
-            self.msrl.learner.update()
-    return loss, total_reward, steps
-```
-
-`@jit`注解表示此方法将被编译为MindSpore计算图用于加速。所有标量值都必须定义为张量类型,例如`self.zero_value = Tensor(0, mindspore.float32)`。
-
-`train_one_episode`方法首先调用环境的`reset`方法,即`self.msrl.collect_environment.reset()`,来重置环境。然后,它使用`self.msrl.agent_act`函数处理程序从环境中收集经验,并通过`self.msrl.replay_buffer_insert`把经验存入到回放缓存中。在收集完经验后,使用`msrl.agent_learn`函数训练目标模型。`self.msrl.agent_learn`的输入是`self.msrl.replay_buffer_sample`返回的采样结果。
-
-回放缓存`ReplayBuffer`由MindSpore Reinforcement提供。它定义了`insert`和`sample`方法,分别用于对经验数据进行存储和采样。详细信息,请参阅[完整的DQN代码示例](https://github.com/mindspore-lab/mindrl/tree/master/example/dqn)。
-
-### 定义DQNPolicy类
-
-定义`DQNPolicy`类,用于实现神经网络并定义策略。
-
-```python
-class DQNPolicy():
-    def __init__(self, params):
-        self.policy_network = FullyConnectedNet(
-            params['state_space_dim'],
-            params['hidden_size'],
-            params['action_space_dim'],
-            params['compute_type'])
-        self.target_network = FullyConnectedNet(
-            params['state_space_dim'],
-            params['hidden_size'],
-            params['action_space_dim'],
-            params['compute_type'])
-```
-
-构造函数将先前在config.py中定义的Python字典类型的超参数`policy_params`作为输入。
-
-在定义策略网络和目标网络之前,用户必须使用MindSpore算子定义神经网络的结构。例如,它们可能是`FullyConnectedNet`类的对象,该类定义如下:
-
-```python
-class FullyConnectedNet(mindspore.nn.Cell):
-    def __init__(self, input_size, hidden_size, output_size, compute_type=mstype.float32):
-        super(FullyConnectedNet, self).__init__()
-        self.linear1 = nn.Dense(
-            input_size,
-            hidden_size,
-            weight_init="XavierUniform").to_float(compute_type)
-        self.linear2 = nn.Dense(
-            hidden_size,
-            output_size,
-            weight_init="XavierUniform").to_float(compute_type)
-        self.relu = nn.ReLU()
-```
-
-DQN算法使用损失函数来优化神经网络的权重。此时,用户必须定义一个用于计算损失函数的神经网络。此网络被指定为`DQNLearner`的嵌套类。此外,还需要优化器来训练网络。优化器和损失函数定义如下:
-
-```python
-class DQNLearner(Learner):
-    """DQN Learner"""
-
-    class PolicyNetWithLossCell(nn.Cell):
-        """DQN policy network with loss cell"""
-
-        def __init__(self, backbone, loss_fn):
-            super(DQNLearner.PolicyNetWithLossCell,
-                  self).__init__(auto_prefix=False)
-            self._backbone = backbone
-            self._loss_fn = loss_fn
-            self.gather = P.GatherD()
-
-        def construct(self, x, a0, label):
-            """Compute the loss of the loss cell"""
-            out = self._backbone(x)
-            out = self.gather(out, 1, a0)
-            loss = self._loss_fn(out, label)
-            return loss
-
-    def __init__(self, params=None):
-        super(DQNLearner, self).__init__()
-        ...
-        optimizer = nn.Adam(
-            self.policy_network.trainable_params(),
-            learning_rate=params['lr'])
-        loss_fn = nn.MSELoss()
-        loss_q_net = self.PolicyNetWithLossCell(self.policy_network, loss_fn)
-        self.policy_network_train = nn.TrainOneStepCell(loss_q_net, optimizer)
-        self.policy_network_train.set_train(mode=True)
-        ...
-```
-
-DQN算法是一种*off-policy*算法,使用epsilon-贪婪策略学习。它使用不同的行为策略来对环境采取行动和收集数据。在本示例中,我们用`RandomPolicy`初始化训练,用`EpsilonGreedyPolicy`收集训练期间的经验,用`GreedyPolicy`进行评估:
-
-```python
-class DQNPolicy():
-    def __init__(self, params):
-        ...
-        self.init_policy = RandomPolicy(params['action_space_dim'])
-        self.collect_policy = EpsilonGreedyPolicy(self.policy_network, (1, 1), params['epsi_high'],
-                                                  params['epsi_low'], params['decay'], params['action_space_dim'])
-        self.evaluate_policy = GreedyPolicy(self.policy_network)
-```
-
-由于上述三种行为策略在一系列RL算法中非常常见,MindSpore Reinforcement将它们作为可重用的构建块提供。用户还可以自定义特定算法的行为策略。
-
-请注意,参数字典的方法名称和键必须与前面定义的算法配置一致。
-
-### 定义DQNActor类
-
-定义一个新的Actor组件用于实现`DQNActor`,该组件继承了MindSpore Reinforcement提供的`Actor`类。然后,必须重载Actor中的方法:
-
-```python
-class DQNActor(Actor):
-    ...
- def act(self, phase, params): - if phase == 1: - # Fill the replay buffer - action = self.init_policy() - new_state, reward, done = self._environment.step(action) - action = self.reshape(action, (1,)) - my_reward = self.select(done, self.penalty, self.reward) - return done, reward, new_state, action, my_reward - if phase == 2: - # Experience collection - self.step += 1 - ts0 = self.expand_dims(params, 0) - step_tensor = self.ones((1, 1), ms.float32) * self.step - - action = self.collect_policy(ts0, step_tensor) - new_state, reward, done = self._environment.step(action) - action = self.reshape(action, (1,)) - my_reward = self.select(done, self.penalty, self.reward) - return done, reward, new_state, action, my_reward - if phase == 3: - # Evaluate the trained policy - ts0 = self.expand_dims(params, 0) - action = self.evaluate_policy(ts0) - new_state, reward, done = self._eval_env.step(action) - return done, reward, new_state - self.print("Phase is incorrect") - return 0 -``` - -这三种方法使用不同的策略作用于指定的环境,这些策略将状态映射到操作。这些方法将张量类型的值作为输入,并从环境返回轨迹。 - -为了与环境交互,Actor使用`Environment`类中定义的`step(action)`方法。对于应用到指定环境的操作,此方法会做出反应并返回三元组。三元组包括应用上一个操作后的新状态、作为浮点类型获得的奖励以及用于终止episode和重置环境的布尔标志。 - -回放缓冲区类`ReplayBuffer`定义了一个`insert`方法,`DQNActor`对象调用该方法将经验数据存储在回放缓冲区中。 - -`Environment`类和`ReplayBuffer`类由MindSpore Reinforcement API提供。 - -`DQNActor`类的构造函数定义了环境、回放缓冲区、策略和网络。它将字典类型的参数作为输入,这些参数在算法配置中定义。下面,我们只展示环境的初始化,其他属性以类似的方式分配: - -```python -class DQNActor(Actor): - def __init__(self, params): - self._environment = params['collect_environment'] - self._eval_env = params['eval_environment'] - ... -``` - -### 定义DQNLearner类 - -为了实现`DQNLearner`,类必须继承MindSpore Reinforcement API中的`Learner`类,并重载`learn`方法: - -```python -class DQNLearner(Learner): - ... 
- def learn(self, experience): - """Model update""" - s0, a0, r1, s1 = experience - next_state_values = self.target_network(s1) - next_state_values = next_state_values.max(axis=1) - r1 = self.reshape(r1, (-1,)) - - y_true = r1 + self.gamma * next_state_values - - # Modify last step reward - one = self.ones_like(r1) - y_true = self.select(r1 == -one, one, y_true) - y_true = self.expand_dims(y_true, 1) - - success = self.policy_network_train(s0, a0, y_true) - return success -``` - -在这里,`learn`方法将轨迹(从回放缓冲区采样)作为输入来训练策略网络。构造函数通过从算法配置接收字典类型的配置,将网络、策略和折扣率分配给DQNLearner: - -```python -class DQNLearner(Learner): - def __init__(self, params=None): - super(DQNLearner, self).__init__() - self.policy_network = params['policy_network'] - self.target_network = params['target_network'] -``` - -## 执行并查看结果 - -执行脚本`train.py`以启动DQN模型训练。 - -```python -cd example/dqn/ -python train.py -``` - -执行结果如下: - -```text ------------------------------------------ -Evaluation result in episode 0 is 95.300 ------------------------------------------ -Episode 0, steps: 33.0, reward: 33.000 -Episode 1, steps: 45.0, reward: 12.000 -Episode 2, steps: 54.0, reward: 9.000 -Episode 3, steps: 64.0, reward: 10.000 -Episode 4, steps: 73.0, reward: 9.000 -Episode 5, steps: 82.0, reward: 9.000 -Episode 6, steps: 91.0, reward: 9.000 -Episode 7, steps: 100.0, reward: 9.000 -Episode 8, steps: 109.0, reward: 9.000 -Episode 9, steps: 118.0, reward: 9.000 -... -... 
-Episode 200, steps: 25540.0, reward: 200.000 -Episode 201, steps: 25740.0, reward: 200.000 -Episode 202, steps: 25940.0, reward: 200.000 -Episode 203, steps: 26140.0, reward: 200.000 -Episode 204, steps: 26340.0, reward: 200.000 -Episode 205, steps: 26518.0, reward: 178.000 -Episode 206, steps: 26718.0, reward: 200.000 -Episode 207, steps: 26890.0, reward: 172.000 -Episode 208, steps: 27090.0, reward: 200.000 -Episode 209, steps: 27290.0, reward: 200.000 ------------------------------------------ -Evaluation result in episode 210 is 200.000 ------------------------------------------ -``` - -![CartPole](images/cartpole.gif) diff --git a/docs/reinforcement/docs/source_zh_cn/environment.md b/docs/reinforcement/docs/source_zh_cn/environment.md deleted file mode 100644 index 6e2242f8122ba10fc2a3f9ac021af6a42bfa8bf4..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/source_zh_cn/environment.md +++ /dev/null @@ -1,80 +0,0 @@ -# 强化学习环境接入 - -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_zh_cn/environment.md) - -## 概述 - -强化学习领域中,智能体与环境交互过程中,学习策略来使得数值化的收益信号最大化。“环境”作为待解决的问题,是强化学习领域中重要的要素。 - -目前强化学习使用的环境种类繁多:[Mujoco](https://github.com/deepmind/mujoco)、[MPE](https://github.com/openai/multiagent-particle-envs)、[Atari](https://github.com/gsurma/atari)、[PySC2](https://www.github.com/deepmind/pysc2)、[SMAC](https://github/oxwhirl/smac)、[TORCS](https://github.com/ugo-nama-kun/gym_torcs)、[Isaac](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs)等,目前MindSpore Reinforcement接入了Gym、SMAC两个环境,后续随着算法的丰富,还会逐渐接入更多的环境。本文将介绍如何在MindSpore Reinforcement下接入第三方环境。 - -## 将环境Python函数封装为算子 - -在此之前,先介绍一下静态图和动态图模式。 - -- 动态图模式下,程序按照代码的编写顺序逐行执行,编译器将神经网络中的各个算子逐一下发到设备进行计算操作,方便用户编写和调试神经网络模型。 - -- 静态图模式下,程序在编译执行时,会将开发者定义的算法编译成一张计算图。在这个过程中,编译器可以通过使用图优化技术来降低资源开销,获得更好的执行性能。 - 
由于静态图模式支持的语法是Python语言的子集,而常用的环境一般使用Python接口实现交互,二者之间的语法差异往往会造成图编译错误。对于这个问题,开发者可以使用`PyFunc`算子将Python函数封装为一个MindSpore计算图中的算子。
-
-接下来以gym为例,将`env.reset()`封装为一个MindSpore计算图中的算子:
-
-下面的代码中创建了一个`CartPole-v0`的环境,执行`env.reset()`方法,可以看到`state`的类型是`numpy.ndarray`,数据类型和维度分别是`np.float64`和`(4,)`。
-
-```python
-import gym
-
-env = gym.make('CartPole-v0')
-state = env.reset()
-print('type: {}, shape: {}, dtype: {}'.format(type(state), state.shape, state.dtype))
-
-# Result:
-# type: <class 'numpy.ndarray'>, shape: (4,), dtype: float64
-```
-
-接下来,使用`PyFunc`算子将`env.reset()`封装为一个MindSpore算子:
-
-- `fn`指定需要封装的Python函数名,既可以是普通的函数,也可以是成员函数。
-- `in_types`和`in_shapes`指定输入的数据类型和维度。`env.reset`没有入参,因此填写空的列表。
-- `out_types`、`out_shapes`指定返回值的数据类型和维度。从之前的执行结果可以看到,`env.reset()`返回值是一个numpy数组,数据类型和维度分别是`np.float64`和`(4,)`,因此填写`[ms.float64,]`和`[(4,),]`。
-- `PyFunc`返回值是个tuple(Tensor)。
-- 更加详细的使用说明[参考](https://gitee.com/mindspore/mindspore/blob/master/mindspore/python/mindspore/ops/operations/other_ops.py)。
-
-## 环境和算法解耦
-
-强化学习算法通常应该具备良好的泛化性,例如解决`HalfCheetah`的算法也应该能够解决`Pendulum`。为了贯彻泛化性的要求,有必要将环境和算法其余部分进行解耦,从而确保在更换环境后,脚本中的其余部分尽量少地修改。建议开发者参考`Environment`对环境进行封装。
-
-```python
-class Environment(nn.Cell):
-    def __init__(self):
-        super(Environment, self).__init__(auto_prefix=False)
-
-    def reset(self):
-        pass
-
-    def step(self, action):
-        pass
-
-    @property
-    def action_space(self) -> Space:
-        pass
-
-    @property
-    def observation_space(self) -> Space:
-        pass
-
-    @property
-    def reward_space(self) -> Space:
-        pass
-
-    @property
-    def done_space(self) -> Space:
-        pass
-```
-
-`Environment`除了提供`reset`和`step`等与环境交互的接口之外,还需要提供`action_space`、`observation_space`等方法,这些接口返回[Space](https://mindspore.cn/reinforcement/docs/zh-CN/master/reinforcement.html#mindspore_rl.environment.Space)类型。算法可以根据`Space`信息:
-
-- 获取环境的状态空间和动作空间的维度,用于构建神经网络。
-- 读取合法的动作范围,对策略网络给出的动作进行缩放和裁剪。
-- 识别环境的动作空间是离散的还是连续的,选择采用连续分布还是离散分布对环境探索。
diff --git a/docs/reinforcement/docs/source_zh_cn/images/cartpole.gif
b/docs/reinforcement/docs/source_zh_cn/images/cartpole.gif deleted file mode 100644 index 48bad3f540b81c56c8ebe07881421aaeb803d19f..0000000000000000000000000000000000000000 Binary files a/docs/reinforcement/docs/source_zh_cn/images/cartpole.gif and /dev/null differ diff --git a/docs/reinforcement/docs/source_zh_cn/images/get.png b/docs/reinforcement/docs/source_zh_cn/images/get.png deleted file mode 100644 index 29ff4f29177460cde5b818a8cf1ad13ab379c152..0000000000000000000000000000000000000000 Binary files a/docs/reinforcement/docs/source_zh_cn/images/get.png and /dev/null differ diff --git a/docs/reinforcement/docs/source_zh_cn/images/insert.png b/docs/reinforcement/docs/source_zh_cn/images/insert.png deleted file mode 100644 index ad602bba69acca0b60a8f9e7bf1472d137593d62..0000000000000000000000000000000000000000 Binary files a/docs/reinforcement/docs/source_zh_cn/images/insert.png and /dev/null differ diff --git a/docs/reinforcement/docs/source_zh_cn/images/mindspore_rl_architecture.png b/docs/reinforcement/docs/source_zh_cn/images/mindspore_rl_architecture.png deleted file mode 100644 index eb50b331d16c95bd031f0714e742db0f5b1f9e26..0000000000000000000000000000000000000000 Binary files a/docs/reinforcement/docs/source_zh_cn/images/mindspore_rl_architecture.png and /dev/null differ diff --git a/docs/reinforcement/docs/source_zh_cn/images/sample.png b/docs/reinforcement/docs/source_zh_cn/images/sample.png deleted file mode 100644 index de7799346464fae8e85f15e7241fade0da4f0ac9..0000000000000000000000000000000000000000 Binary files a/docs/reinforcement/docs/source_zh_cn/images/sample.png and /dev/null differ diff --git a/docs/reinforcement/docs/source_zh_cn/index.rst b/docs/reinforcement/docs/source_zh_cn/index.rst deleted file mode 100644 index 46057bdf1d63b7604dd9bfa47b0ab8d5aa2c3c4c..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/source_zh_cn/index.rst +++ /dev/null @@ -1,69 +0,0 @@ -MindSpore Reinforcement 文档 -============================= 
- -MindSpore Reinforcement是一款强化学习套件,支持使用强化学习算法对agent进行分布式训练。 - -MindSpore Reinforcement为编写强化学习算法提供了简洁的API抽象,它将算法与具体的部署和执行过程解耦,包括加速器的使用、并行度以及跨节点的计算调度。MindSpore Reinforcement将强化学习算法转换为一系列编译后的计算图,然后由MindSpore框架在CPU、GPU或AscendAI处理器上高效运行。 - -.. raw:: html - - - -代码仓地址: - -设计特点 --------- - -1. 提供以算法为中心的API,用于编写强化学习算法 - - 在MindSpore Reinforcement中,用户使用直观的算法概念(如agent、actor、environment、learner)来描述由Python表达的强化学习算法。Agent包含与环境交互并收集奖励的actor。根据奖励,learner更新用于控制actor行为的策略。用户可以专注于算法的实现,而不用关注框架的计算细节。 - -2. 将强化学习算法与其执行策略解耦 - - MindSpore Reinforcement提供的用于算法实现的API并没有假设算法如何被执行。因此,MindSpore Reinforcement可以在单GPU的笔记本电脑和多GPU的计算机集群上执行相同的算法。用户提供了单独的执行配置,该配置描述了MindSpore Reinforcement可以用于训练的资源。 - -3. 高效加速强化学习算法 - - MindSpore Reinforcement旨在通过在硬件加速器(如GPU或Ascend AI处理器)上执行计算,加速对强化学习算法的训练。它不仅加速了神经网络的计算,而且还将actor和learner的逻辑转换为具有并行算子的计算图。MindSpore利用框架自身在编译和自动并行上的特性优势来执行这些计算图。 - -未来路标 ---------- - -- MindSpore Reinforcement初始版本包含一个稳定的API,用于实现强化学习算法和使用MindSpore的计算图执行计算。现已支持算法并行和半自动分布式执行能力,支持多agent场景,暂不支持自动的分布式能力。MindSpore Reinforcement的后续版本将包含这些功能,敬请期待。 - -使用MindSpore Reinforcement的典型场景 --------------------------------------- - -- `训练深度Q网络 `_ - - DQN算法使用经验回放技术来维护先前的观察结果,进行off-policy学习。 - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: 安装部署 - - reinforcement_install - -.. toctree:: - :glob: - :maxdepth: 1 - :caption: 使用指南 - - custom_config_info - dqn - replaybuffer - environment - -.. toctree:: - :maxdepth: 1 - :caption: API参考 - - reinforcement - -.. 
toctree:: - :glob: - :maxdepth: 1 - :caption: RELEASE NOTES - - RELEASE diff --git a/docs/reinforcement/docs/source_zh_cn/reinforcement_install.md b/docs/reinforcement/docs/source_zh_cn/reinforcement_install.md deleted file mode 100644 index f13d8f9f5721c3a167953ada8fd5120f1a99bdef..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/source_zh_cn/reinforcement_install.md +++ /dev/null @@ -1,36 +0,0 @@ -# 安装MindSpore Reinforcement - -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_zh_cn/reinforcement_install.md) - -MindSpore Reinforcement依赖MindSpore训练推理框架,安装完[MindSpore](https://gitee.com/mindspore/mindspore#安装),再安装MindSpore Reinforcement。可以采用pip安装或者源码编译安装两种方式。 - -## pip安装 - -使用pip命令安装,请从[MindSpore Reinforcement下载页面](https://www.mindspore.cn/versions)下载并安装whl包。 - -```shell -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{ms_version}/Reinforcement/any/mindspore_rl-{mr_version}-py3-none-any.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -> - 在联网状态下,安装whl包时会自动下载MindSpore Reinforcement安装包的依赖项(依赖项详情参见requirement.txt),其余情况需自行安装。 -> - `{ms_version}`表示与MindSpore Reinforcement匹配的MindSpore版本号,例如下载0.1.0版本MindSpore Reinforcement时,`{ms_version}`应写为1.5.0。 -> - `{mr_version}`表示MindSpore Reinforcement版本号,例如下载0.1.0版本MindSpore Reinforcement时,`{mr_version}`应写为0.1.0。 - -## 源码编译安装 - -下载[源码](https://github.com/mindspore-lab/mindrl),下载后进入`reinforcement`目录。 - -```shell -bash build.sh -pip install output/mindspore_rl-0.1.0-py3-none-any.whl -``` - -其中,`build.sh`为`reinforcement`目录下的编译脚本文件。 - -## 验证安装是否成功 - -执行以下命令,验证安装结果。导入Python模块不报错即安装成功: - -```python -import mindspore_rl -``` diff --git a/docs/reinforcement/docs/source_zh_cn/replaybuffer.md b/docs/reinforcement/docs/source_zh_cn/replaybuffer.md deleted file mode 100644 index 
3399972f68e31a4cad163e5432fdfd752569a7d3..0000000000000000000000000000000000000000 --- a/docs/reinforcement/docs/source_zh_cn/replaybuffer.md +++ /dev/null @@ -1,136 +0,0 @@ -# ReplayBuffer 使用说明 - -[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/reinforcement/docs/source_zh_cn/replaybuffer.md) - -## ReplayBuffer 简介 - -在强化学习中,ReplayBuffer是一个常用的基本数据存储方式,它的功能在于存放智能体与环境交互得到的数据。 -使用ReplayBuffer可以解决以下几个问题: - -1. 存储的历史经验数据,可以通过采样的方式抽取,以打破训练数据的相关性,使抽样的数据具有独立同分布的特性。 -2. 可以提供数据的临时存储,提高数据的利用率。 - -## MindSpore Reinforcement Learning 的 ReplayBuffer 实现 - -一般情况下,算法人员使用原生的Python数据结构或Numpy的数据结构来构造ReplayBuffer,或者一般的强化学习框架也提供了标准的API封装。不同的是,MindSpore实现了设备端的ReplayBuffer结构,一方面能在使用GPU硬件时减少数据在Host和Device之间的频繁拷贝,另一方面,以MindSpore算子的形式表达ReplayBuffer,可以构建完整的IR图,使能MindSpore GRAPH_MODE的各种图优化,提升整体的性能。 - -在MindSpore中,提供了两种ReplayBuffer,分别是UniformReplayBuffer和PriorityReplayBuffer,分别用于常用的FIFO存储和带有优先级的存储。下面以UniformReplayBuffer为例介绍实现及使用。 -以一个List的Tensor表示,每个Tensor代表一组按列存储的数据(如一组[state, action, reward])。新放入UniformReplayBuffer中的数据以FIFO的机制进行内容的更新,具有插入、查找、采样等功能。 - -### 参数解释 - -创建一个UniformReplayBuffer,初始化参数为batch_size、capacity、shapes、types。 - -* batch_size表示sample一次数据的大小,整数值。 -* capacity表示创建UniformReplayBuffer的总容量,整数值。 -* shapes表示Buffer中,每一组数据的shape大小,以list表示。 -* types表示Buffer中,每一组数据对应的数据类型,以list表示。 - -### 功能介绍 - -#### 1 插入 -- insert - -插入方法接收一组数据作为入参,需满足数据的shape和type与创建的UniformReplayBuffer参数一致。无输出。 -为了模拟循环队列的FIFO特性,我们使用两个游标来确定队列的头部head和有效长度count。下图展示了几次插入操作的过程。 - -1. buffer的总大小为6,初始状态时,游标head和count均为0。 -2. 插入一个batch_size为2的数据后,当前的head不变,count加2。 -3. 继续插入一个batch_size为4的数据后,队列已满,count为6。 -4. 继续插入一个batch_size为2的数据后,覆盖式更新旧数据,并将head加2。 - -![insert 示意图](images/insert.png) - -#### 2 查找 -- get_item - -查找方法接受一个index作为入参,表示需要查找的数据的具体位置。输出为一组Tensor。如下图所示: - -1. UniformReplayBuffer刚满或未满的情况下,根据index直接找到对应数据。 -2. 
对于已经覆盖过的数据,通过游标进行重映射。
-
-![get_item 示意图](images/get.png)
-
-#### 3 采样 -- sample
-
-采样方法无输入,输出为一组Tensor,大小为创建UniformReplayBuffer时的batch_size大小。如下图所示:
-假定batch_size为3,算子中会随机产生一组indexes,这组随机的indexes有两种情况:
-
-1. 保序:每个index即代表真实的数据位置,需要经过游标重映射操作。
-2. 不保序:每个index不代表真实位置,直接获取。
-
-两种方式对随机性有轻微影响,默认采用不保序的方式以获取最佳的性能。
-
-![sample 示意图](images/sample.png)
-
-## MindSpore Reinforcement Learning 的 UniformReplayBuffer 使用介绍
-
-### UniformReplayBuffer的创建
-
-MindSpore Reinforcement Learning 提供了标准的ReplayBuffer API。用户可以使用配置文件的方式使用框架创建的ReplayBuffer,形如[dqn](https://github.com/mindspore-lab/mindrl/tree/master/mindspore_rl/algorithm/dqn/config.py)的配置文件:
-
-```python
-'replay_buffer':
-    {'number': 1,
-     'type': UniformReplayBuffer,
-     'capacity': 100000,
-     'data_shape': [(4,), (1,), (1,), (4,)],
-     'data_type': [ms.float32, ms.int32, ms.float32, ms.float32],
-     'sample_size': 64}
-```
-
-或者,用户可以直接使用API接口,创建所需的数据结构:
-
-```python
-from mindspore_rl.core.uniform_replay_buffer import UniformReplayBuffer
-import mindspore as ms
-sample_size = 2
-capacity = 100000
-shapes = [(4,), (1,), (1,), (4,)]
-types = [ms.float32, ms.int32, ms.float32, ms.float32]
-replaybuffer = UniformReplayBuffer(sample_size, capacity, shapes, types)
-```
-
-### 使用创建的UniformReplayBuffer
-
-以API形式创建的[UniformReplayBuffer](https://github.com/mindspore-lab/mindrl/tree/master/mindspore_rl/core/uniform_replay_buffer.py)为例,进行数据操作:
-
-* 插入操作
-
-```python
-state = ms.Tensor([0.1, 0.2, 0.3, 0.4], ms.float32)
-action = ms.Tensor([1], ms.int32)
-reward = ms.Tensor([1], ms.float32)
-new_state = ms.Tensor([0.4, 0.3, 0.2, 0.1], ms.float32)
-replaybuffer.insert([state, action, reward, new_state])
-replaybuffer.insert([state, action, reward, new_state])
-```
-
-* 查找操作
-
-```python
-exp = replaybuffer.get_item(0)
-```
-
-* 采样操作
-
-```python
-samples = replaybuffer.sample()
-```
-
-* 重置操作
-
-```python
-replaybuffer.reset()
-```
-
-* 当前buffer使用的大小
-
-```python
-size = replaybuffer.size()
-```
-
-* 判断当前buffer是否已满
-
-```python
-if replaybuffer.full():
-    print("Full use of this buffer.")
-```
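上文用head和count两个游标描述了UniformReplayBuffer的FIFO覆盖与下标重映射语义。下面给出一个纯Python/NumPy的最小示意实现,仅用于演示这一语义:这是假设性代码,类名`SimpleUniformBuffer`为示意所取,并非MindSpore的设备端算子实现,插入也按单条经验(而非按batch)进行。

```python
import numpy as np

class SimpleUniformBuffer:
    """上文FIFO循环队列语义的纯Python示意(非MindSpore实现)。"""

    def __init__(self, sample_size, capacity, shapes, dtypes):
        self.sample_size = sample_size
        self.capacity = capacity
        # 按列存储:每个buffer对应一组数据(如state/action/reward)
        self.buffers = [np.zeros((capacity,) + s, d) for s, d in zip(shapes, dtypes)]
        self.head = 0    # 队列头部游标
        self.count = 0   # 有效长度游标

    def insert(self, exp):
        """插入一条经验;队列满后覆盖最旧数据并前移head。"""
        pos = (self.head + self.count) % self.capacity
        for buf, item in zip(self.buffers, exp):
            buf[pos] = item
        if self.count < self.capacity:
            self.count += 1
        else:
            self.head = (self.head + 1) % self.capacity

    def get_item(self, index):
        """按逻辑下标查找:经head游标重映射到物理位置。"""
        pos = (self.head + index) % self.capacity
        return [buf[pos] for buf in self.buffers]

    def sample(self):
        """随机采样sample_size条数据(对应文中"不保序"方式)。"""
        idx = np.random.randint(0, self.count, self.sample_size)
        return [buf[idx] for buf in self.buffers]

    def full(self):
        return self.count == self.capacity
```

例如,容量为3的buffer插入4条数据后,`get_item(0)`经重映射返回的是第2条数据(最旧的第1条已被覆盖),与上文insert/get_item示意图描述的行为一致。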