diff --git a/docs/mindspore/source_en/faq/network_compilation.md b/docs/mindspore/source_en/faq/network_compilation.md
index 03648f0f875b5a7be043d2123497b94e0eb0152e..ea6b8e474722936f52ca12a42a390bf26f479a36 100644
--- a/docs/mindspore/source_en/faq/network_compilation.md
+++ b/docs/mindspore/source_en/faq/network_compilation.md
@@ -37,7 +37,7 @@ A: MindSpore does not support the `yield` syntax in graph mode.
 
 A: In the inference stage of front-end compilation, the abstract types of nodes, including `type` and `shape`, will be inferred. Common abstract types include `AbstractScalar`, `AbstractTensor`, `AbstractFunction`, `AbstractTuple`, `AbstractList`, etc. In some scenarios, such as multi-branch scenarios, the abstract types of the return values of different branches will be `join` to infer the abstract type of the returned result. If these abstract types do not match, or `type`/`shape` are inconsistent, the above exception will be thrown.
 
-When an error similar to `Type Join Failed: dtype1 = Float32, dtype2 = Float16` appears, it means that the data types are inconsistent, resulting in an exception when joining abstract. According to the provided data types and code line, the error can be quickly located. In addition, the specific abstract information and node information are provided in the error message. You can view the MindIR information through the `analyze_fail.ir` file to locate and solve the problem. For specific introduction of MindIR, please refer to [MindSpore IR (MindIR)](https://www.mindspore.cn/docs/en/master/design/all_scenarios.html#mindspore-ir-mindir). The code sample is as follows:
+When an error similar to `Type Join Failed: dtype1 = Float32, dtype2 = Float16` appears, it means that the data types are inconsistent, resulting in an exception when joining abstract. According to the provided data types and code line, the error can be quickly located. In addition, the specific abstract information and node information are provided in the error message. You can view the MindIR information through the `analyze_fail.ir` file to locate and solve the problem. The code sample is as follows:
 
 ```python
 import numpy as np
diff --git a/docs/mindspore/source_en/features/compile/graph_optimization.md b/docs/mindspore/source_en/features/compile/graph_optimization.md
index 3c5b9cfe8294157b679330c2ddab03b2e7965827..9c9687f514300854efce7b1707c1f42224f130de 100644
--- a/docs/mindspore/source_en/features/compile/graph_optimization.md
+++ b/docs/mindspore/source_en/features/compile/graph_optimization.md
@@ -2,7 +2,7 @@
 
 [![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindspore/source_en/features/compile/graph_optimization.md)
 
-Similar to traditional compilers, MindSpore also performs compilation optimization after graph construction. The main purpose of compilation optimization is to analyze and transform MindSpore's intermediate representation [MindIR](https://www.mindspore.cn/docs/en/master/design/all_scenarios.html#mindspore-ir-mindir) by static analysis techniques to achieve goals such as reducing the size of the target code, improving execution efficiency, lowering runtime resource consumption, or enhancing other performance metrics. Compilation optimization is a crucial part of the graph compilation system and plays an extremely important role in improving the performance and resource utilization of the entire neural network model. Compared with the original code that has not been optimized, compilation optimization can bring several times or even tens of times performance improvement.
+Similar to traditional compilers, MindSpore also performs compilation optimization after graph construction. The main purpose of compilation optimization is to analyze and transform MindSpore's intermediate representation MindIR by static analysis techniques to achieve goals such as reducing the size of the target code, improving execution efficiency, lowering runtime resource consumption, or enhancing other performance metrics. Compilation optimization is a crucial part of the graph compilation system and plays an extremely important role in improving the performance and resource utilization of the entire neural network model. Compared with the original code that has not been optimized, compilation optimization can bring several times or even tens of times performance improvement.
 
 This section mainly introduces front-end compilation optimization techniques that are independent of specific hardware. Hardware-specific back-end compilation optimization techniques are not within the scope of this discussion.
diff --git a/docs/mindspore/source_en/features/compile/multi_level_compilation.md b/docs/mindspore/source_en/features/compile/multi_level_compilation.md
index c6282d3283987a6624a08b93a5d0b6cb795611cc..43deec2a3e063987672f8eb80b8fc200b1774100 100644
--- a/docs/mindspore/source_en/features/compile/multi_level_compilation.md
+++ b/docs/mindspore/source_en/features/compile/multi_level_compilation.md
@@ -101,7 +101,7 @@ The overall architecture of graph-kernel fusion is shown in the figure below. Th
 
 The optimized computational graph is passed to MindSpore AKG as a subgraph for further back-end optimization and target code generation.
 
-![graphkernel](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/design/images/graphkernel.png)
+![graphkernel](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/features/images/graphkernel.png)
 
 By following these steps, we can obtain two aspects of performance gains:
diff --git a/docs/mindspore/source_en/features/data_engine.md b/docs/mindspore/source_en/features/data_engine.md
index ba9000f81187abfcdd973b06632ac33610a5299c..fada48b563d61b9066e17ffa3fd874abeaacfa9a 100644
--- a/docs/mindspore/source_en/features/data_engine.md
+++ b/docs/mindspore/source_en/features/data_engine.md
@@ -16,7 +16,7 @@ The core of MindSpore training data processing engine is to efficiently and flex
 
 Please refer to the instructions for usage: [Data Loading And Processing](https://www.mindspore.cn/docs/en/master/features/dataset/overview.html)
 
-![image](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/design/images/data/data_engine_en.png)
+![image](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/features/images/data/data_engine_en.png)
 
 MindSpore training data engine also provides efficient loading and sampling capabilities of datasets in fields, such as scientific computing-electromagnetic simulation, remote sensing large-format image processing, helping MindSpore achieve full-scene support.
 
@@ -26,7 +26,7 @@ MindSpore training data engine also provides efficient loading and sampling capa
 
 The design of MindSpore considers the efficiency, flexibility and adaptability of data processing in different scenarios. The whole data processing subsystem is divided into the following modules:
 
-![image](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/design/images/data/architecture.png)
+![image](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/features/images/data/architecture.png)
 
 - API: The data processing process is represented in MindSpore in the form of a graph, called a data graph. MindSpore provides Python API to define data graphs externally and implement graph optimization and graph execution internally.
 - Data Processing Pipeline: Data loading and pre-processing multi-step parallel pipeline, which consists of the following components.
diff --git a/docs/mindspore/source_en/features/overview.md b/docs/mindspore/source_en/features/overview.md
index 9a6f425b186b4958e94e6036d90394a324acd603..9d49376aa4bf3db34710b0d1a161396573228c12 100644
--- a/docs/mindspore/source_en/features/overview.md
+++ b/docs/mindspore/source_en/features/overview.md
@@ -33,13 +33,13 @@ MindSpore is a full-scenario deep learning framework designed to achieve three m
 
 ### Fusion of Functional and Object-Oriented Programming Paradigms
 
-MindSpore provides both object-oriented and function-oriented [programming paradigms](https://www.mindspore.cn/docs/en/master/design/programming_paradigm.html), both of which can be used to construct network algorithms and training processes.
+MindSpore provides both object-oriented and function-oriented programming paradigms, both of which can be used to construct network algorithms and training processes.
 
 Developers can derive from the nn.Cell class to define AI networks or layers with required functionality, and assemble various defined layers through nested object calls to complete the definition of the entire AI network.
 
 At the same time, developers can also define a pure Python function that can be source-to-source compiled by MindSpore, and accelerate its execution through functions or decorators provided by MindSpore. Under the requirements of MindSpore's static syntax, pure Python functions can support nested subfunctions, control logic, and even recursive function expressions. Therefore, based on this programming paradigm, developers can flexibly enable certain functional features, making it easier to express business logic.
 
-MindSpore implements [functional differential programming](https://www.mindspore.cn/docs/en/master/design/programming_paradigm.html#functional-differential-programming), which performs differentiation based on the call chain according to the calling relationship for function objects that can be differentiated. This automatic differentiation strategy better aligns with mathematical semantics and has an intuitive correspondence with composite functions in basic algebra. As long as the derivative formulas of basic functions are known, the derivative formula of a composite function composed of any basic functions can be derived.
+MindSpore implements functional differential programming, which performs differentiation based on the call chain according to the calling relationship for function objects that can be differentiated. This automatic differentiation strategy better aligns with mathematical semantics and has an intuitive correspondence with composite functions in basic algebra. As long as the derivative formulas of basic functions are known, the derivative formula of a composite function composed of any basic functions can be derived.
 
 At the same time, based on the functional programming paradigm, MindSpore provides rich higher-order functions such as vmap, shard, and other built-in higher-order functions. Like the differential function grad, these allow developers to conveniently construct a function or object as a parameter for higher-order functions. Higher-order functions, after internal compilation optimization, generate optimized versions of developers' functions, implementing features such as vectorization transformation and distributed parallel partitioning.
@@ -57,7 +57,7 @@ MindSpore builds the graph structure of neural networks based on Python, which p
 
 Native Python expressions can directly enable static graph mode execution based on Python control flow keywords, making the programming unification of dynamic and static graphs higher. At the same time, developers can flexibly control Python code fragments in dynamic and static graph modes based on MindSpore's interfaces. That is, local functions can be executed in static graph mode ([mindspore.jit](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.jit.html)) while other functions are executed in dynamic graph mode. This allows developers to flexibly specify function fragments for static graph optimization and acceleration when interleaving with common Python libraries and custom Python functions, without sacrificing the programming ease of interleaved execution.
 
-### [Distributed Parallel Computing](https://www.mindspore.cn/docs/en/master/design/distributed_training_design.html)
+### Distributed Parallel Computing
 
 As large model parameters continue to grow, complex and diverse distributed parallel strategies are needed to address this challenge. MindSpore has built-in multi-dimensional distributed training strategies that developers can flexibly assemble and use. Through parallel abstraction, communication operations are hidden, simplifying the complexity of parallel programming for developers.
@@ -71,7 +71,7 @@ At the same time, MindSpore also provides various parallel strategies such as pi
 
 Based on compilation technology, MindSpore provides rich hardware-independent optimizations such as IR fusion, algebraic simplification, constant folding, and common subexpression elimination. At the same time, it also provides various hardware optimization capabilities for different hardware such as NPU and GPU, thereby better leveraging the large-scale computational acceleration capabilities of hardware.
 
-#### [Graph-Algorithm Fusion](https://www.mindspore.cn/docs/en/master/design/multi_level_compilation.html#graph-kernel-fusion)
+#### [Graph-Algorithm Fusion](https://www.mindspore.cn/docs/en/master/features/compile/multi_level_compilation.html#graph-kernel-fusion)
 
 Mainstream AI computing frameworks like MindSpore typically define operators from the perspective of developer understanding and ease of use. Each operator carries varying amounts of computation and computational complexity. However, from a hardware execution perspective, this natural operator computational division based on the developer's perspective is not efficient and cannot fully utilize hardware computational capabilities. This is mainly reflected in:
@@ -95,12 +95,12 @@ Loop sinking is an optimization based on On Device execution, aimed at further r
 
 Data sinking means that data is directly transmitted to the Device through channels.
 
-### [Unified Deployment Across All Scenarios](https://www.mindspore.cn/docs/en/master/design/all_scenarios.html)
+### Unified Deployment Across All Scenarios
 
 MindSpore is an AI framework that integrates training and inference, supporting both training and inference functions. At the same time, MindSpore supports various chips such as CPU, GPU, and NPU, and provides unified programming interfaces and can generate offline models that can be loaded and executed on various hardware.
 
 According to actual execution environments and business requirements, MindSpore provides multiple specification versions, supporting deployment on cloud, servers, mobile and other embedded devices, and ultra-lightweight devices such as earphones.
 
-### [Third-Party Hardware Integration](https://www.mindspore.cn/docs/en/master/design/pluggable_device.html)
+### [Third-Party Hardware Integration](https://www.mindspore.cn/docs/en/master/features/runtime/pluggable_device.html)
 
-Based on the unified MindIR, MindSpore has built an open AI architecture that supports third-party chip plugins, standardization, and low-cost rapid integration, which can connect to GPU series chips as well as various DSA chips. MindSpore provides two chip integration methods: Kernel mode and Graph mode, allowing chip manufacturers to choose the integration method according to their own characteristics.
\ No newline at end of file
+Based on the unified MindIR, MindSpore has built an open AI architecture that supports third-party chip plugins, standardization, and low-cost rapid integration, which can connect to GPU series chips as well as various DSA chips. MindSpore provides two chip integration methods: Kernel mode and Graph mode, allowing chip manufacturers to choose the integration method according to their own characteristics.
diff --git a/docs/mindspore/source_en/features/parallel/data_parallel.md b/docs/mindspore/source_en/features/parallel/data_parallel.md
index aef44a6c9b7b8c2f9b8b0fa38187711bc16a5f88..3fa7ff6c141734c14f9bcb013cb215d6cfa01735 100644
--- a/docs/mindspore/source_en/features/parallel/data_parallel.md
+++ b/docs/mindspore/source_en/features/parallel/data_parallel.md
@@ -15,7 +15,7 @@ Related interfaces are as follows:
 
 ## Overall Process
 
-![Overall Process](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/design/images/data_parallel.png)
+![Overall Process](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/features/images/data_parallel.png)
 
 1. Environmental dependencies
diff --git a/docs/mindspore/source_en/features/runtime/memory_manager.md b/docs/mindspore/source_en/features/runtime/memory_manager.md
index ce1857ee237164681de0f7d630058a91abc6b140..1162827adce30c41f68c497e4da76979301607b9 100644
--- a/docs/mindspore/source_en/features/runtime/memory_manager.md
+++ b/docs/mindspore/source_en/features/runtime/memory_manager.md
@@ -9,7 +9,7 @@ Device memory (hereinafter referred to as memory) is the most important resource
 1. Memory pool serves as a base for memory management and can effectively avoid the overhead of frequent dynamic allocation of memory.
 2. Memory reuse algorithm, as a core competency in memory management, needs to have efficient memory reuse results as well as minimal memory fragmentation.
 
-![memory_manager](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/design/images/multi_level_compilation/jit_level_memory_manage.png)
+![memory_manager](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/features/images/multi_level_compilation/jit_level_memory_manage.png)
 
 ## Interfaces
 
@@ -22,7 +22,7 @@ The memory management-related interfaces are detailed in [runtime interfaces](ht
 
 The core idea of memory pool as a base for memory management is to pre-allocate a large block of contiguous memory, allocate it directly from the pool when applying for memory, and return it to the pool for reuse when releasing it, instead of frequently calling the memory application and release interfaces in the system, which reduces the overhead of frequent dynamic allocations, and improves system performance. MindSpore mainly uses the BestFit memory allocation algorithm, supports dynamic expansion of memory blocks and defragmentation, and sets the initialization parameters of the memory pool through the interface [mindspore.runtime.set_memory(init_size,increase_size,max_size)](https://www.mindspore.cn/docs/en/master/api_python/runtime/mindspore.runtime.set_memory.html) to control the dynamic expansion size and maximum memory usage.
 
-![memory_pool](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/design/images/multi_level_compilation/jit_level_memory_pool.png)
+![memory_pool](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/features/images/multi_level_compilation/jit_level_memory_pool.png)
 
 1. Slicing operation: When memory is allocated, free areas are sorted according to their sizes, the first free area that meets the requirements is found, allocated on demand, the excess is cut, and a new block of free memory is inserted.
 2. Merge operation: When memory is reclaimed, neighboring free memory blocks are reclaimed and merged into one large free memory block.
@@ -55,4 +55,4 @@ Dynamic memory reuse is just the opposite of static memory reuse, transferring t
 4. Reset the initial reference count from step 1.
 
 - Pros: Dynamic memory reuse during the graph execution phase, fully generalized, especially friendly for dynamic shape and control flow scenarios.
-- Cons: The graph execution phase is reused on demand, obtains no global information, and is prone to memory fragmentation.
\ No newline at end of file
+- Cons: The graph execution phase is reused on demand, obtains no global information, and is prone to memory fragmentation.
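The slicing and merge operations described in the `memory_manager.md` hunk above can be sketched as a toy best-fit pool. This is an illustration only — the class and all names below are invented for the sketch and are not MindSpore's allocator:

```python
# Toy best-fit pool: free_list holds (offset, size) blocks; alloc picks the
# smallest block that fits (best fit), slicing off any excess; free merges
# neighbouring free blocks back into one larger block.
class ToyBestFitPool:
    def __init__(self, capacity):
        self.free_list = [(0, capacity)]
        self.allocated = {}  # offset -> size

    def alloc(self, size):
        candidates = [b for b in self.free_list if b[1] >= size]
        if not candidates:
            return None  # a real pool would expand here
        offset, block_size = min(candidates, key=lambda b: b[1])
        self.free_list.remove((offset, block_size))
        if block_size > size:
            # slicing: cut the excess into a new free block
            self.free_list.append((offset + size, block_size - size))
        self.allocated[offset] = size
        return offset

    def free(self, offset):
        size = self.allocated.pop(offset)
        self.free_list.append((offset, size))
        # merge: coalesce adjacent free blocks
        self.free_list.sort()
        merged = [self.free_list[0]]
        for off, sz in self.free_list[1:]:
            last_off, last_sz = merged[-1]
            if last_off + last_sz == off:
                merged[-1] = (last_off, last_sz + sz)
            else:
                merged.append((off, sz))
        self.free_list = merged
```

Allocating 100 then 200 bytes from a 1024-byte pool returns offsets 0 and 100; freeing both merges the fragments back into a single `(0, 1024)` free block.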
diff --git a/docs/mindspore/source_en/features/runtime/multilevel_pipeline.md b/docs/mindspore/source_en/features/runtime/multilevel_pipeline.md
index 62276109aae103df85befb55676b37b5b7f83609..235120d55c435a9349a24584098aea1a7f62f1e6 100644
--- a/docs/mindspore/source_en/features/runtime/multilevel_pipeline.md
+++ b/docs/mindspore/source_en/features/runtime/multilevel_pipeline.md
@@ -12,7 +12,7 @@ Runtime scheduling for an operator mainly includes the operations InferShape (in
 
 Multi-stage flow is a key performance optimization point for runtime, which improves runtime scheduling efficiency by task decomposition and parallel flow issued to give full play to CPU multi-core performance. The main flow is as follows:
 
-![rt_pipeline](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/design/images/multi_level_compilation/jit_level_rt_pipeline.png)
+![rt_pipeline](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/features/images/multi_level_compilation/jit_level_rt_pipeline.png)
 
 1. Task decomposition: the operator scheduling is decomposed into three tasks InferShape, Resize and Launch.
 2. Queue creation: Create three queues, Infer Queue, Resize Queue and Launch Queue, for taking over the three tasks in step 1.
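The task decomposition in the `multilevel_pipeline.md` hunk above — InferShape, Resize and Launch tasks handed along three dedicated queues — can be modeled with ordinary threads and FIFO queues. A minimal sketch, with placeholder task bodies standing in for the real scheduling steps (none of these names come from MindSpore):

```python
import queue
import threading

# One worker thread per stage, FIFO queues between stages, so the "infer" of
# operator K+1 can overlap with the "resize"/"launch" of operator K.
results = []
infer_q, resize_q, launch_q = queue.Queue(), queue.Queue(), queue.Queue()

def stage(in_q, out_q, fn):
    """Apply fn to every item from in_q and forward it; None shuts the stage down."""
    def worker():
        while True:
            item = in_q.get()
            if item is None:
                if out_q is not None:
                    out_q.put(None)  # propagate the shutdown sentinel downstream
                return
            out = fn(item)
            if out_q is not None:
                out_q.put(out)
    t = threading.Thread(target=worker)
    t.start()
    return t

threads = [
    stage(infer_q, resize_q, lambda op: op + ":inferred"),                 # InferShape
    stage(resize_q, launch_q, lambda op: op + ":resized"),                 # Resize
    stage(launch_q, None, lambda op: results.append(op + ":launched")),    # Launch
]

for op in ["matmul", "add", "relu"]:
    infer_q.put(op)   # the host thread only issues tasks; stages run concurrently
infer_q.put(None)     # end of stream
for t in threads:
    t.join()

print(results)
```

Because each stage has a single worker draining a FIFO queue, per-operator order is preserved even though the three stages run concurrently.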
diff --git a/docs/mindspore/source_en/features/runtime/multistream_concurrency.md b/docs/mindspore/source_en/features/runtime/multistream_concurrency.md index bc482deafd1f84b73470b226b121ad302bcbe763..b99d645b3ffac040d2061977cdcfea8d372adda0 100644 --- a/docs/mindspore/source_en/features/runtime/multistream_concurrency.md +++ b/docs/mindspore/source_en/features/runtime/multistream_concurrency.md @@ -10,7 +10,7 @@ During the training of large-scale deep learning models, the importance of commu Traditional multi-stream concurrency methods usually rely on manual configuration, which is not only cumbersome and error-prone, but also often difficult to achieve optimal concurrency when faced with complex computational graphs. MindSpore's automatic stream assignment feature automatically identifies and assigns concurrency opportunities in the computational graph by means of an intelligent algorithm, and assigns different operators to different streams for execution. This automated allocation process not only simplifies user operations, but also enables dynamic adjustment of stream allocation policies at runtime to accommodate different computing environments and resource conditions. 
-![multi_stream](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/design/images/multi_level_compilation/jit_level_multi_stream.png) +![multi_stream](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/features/images/multi_level_compilation/jit_level_multi_stream.png) The principles are as follows: diff --git a/docs/mindspore/source_en/index.rst b/docs/mindspore/source_en/index.rst index b2abe11b6715e79a7738e3b5cbea31795cbfa822..b0e36e72f37598e87c6c51f6d7dd36367899600f 100644 --- a/docs/mindspore/source_en/index.rst +++ b/docs/mindspore/source_en/index.rst @@ -6,7 +6,6 @@ MindSpore Documentation :maxdepth: 1 :hidden: - design/index features/index api_python/index faq/index @@ -19,7 +18,7 @@ MindSpore Documentation
- +
Design diff --git a/docs/mindspore/source_zh_cn/faq/network_compilation.md b/docs/mindspore/source_zh_cn/faq/network_compilation.md index c0dafe92b2ec5bd67e71c2f477689b51ee8fe984..233a8a87c409967932be66d200fe8e00cd28a58c 100644 --- a/docs/mindspore/source_zh_cn/faq/network_compilation.md +++ b/docs/mindspore/source_zh_cn/faq/network_compilation.md @@ -37,7 +37,7 @@ A: MindSpore在静态图模式下不支持 `yield` 语法。 A: 在前端编译的推理阶段,会对节点的抽象类型(包含 `type`、`shape` 等)进行推导,常见抽象类型包括 `AbstractScalar`、`AbstractTensor`、`AbstractFunction`、`AbstractTuple`、`AbstractList` 等。在一些场景比如多分支场景,会对不同分支返回值的抽象类型进行 `join` 合并,推导出返回结果的抽象类型。如果抽象类型不匹配,或者 `type`/`shape` 不一致,则会抛出以上异常。 -当出现类似`Type Join Failed: dtype1 = Float32, dtype2 = Float16`的报错时,说明数据类型不一致,导致抽象类型合并失败。根据提供的数据类型和代码行信息,可以快速定位出错范围。此外,报错信息中提供了具体的抽象类型信息、节点信息,可以通过 `analyze_fail.ir` 文件查看MindIR信息,定位解决问题。关于MindIR的具体介绍,可以参考[MindSpore IR(MindIR)](https://www.mindspore.cn/docs/zh-CN/master/design/all_scenarios.html#中间表示mindir)。代码样例如下: +当出现类似`Type Join Failed: dtype1 = Float32, dtype2 = Float16`的报错时,说明数据类型不一致,导致抽象类型合并失败。根据提供的数据类型和代码行信息,可以快速定位出错范围。此外,报错信息中提供了具体的抽象类型信息、节点信息,可以通过 `analyze_fail.ir` 文件查看MindIR信息,定位解决问题。代码样例如下: ```python import numpy as np diff --git a/docs/mindspore/source_zh_cn/features/compile/graph_optimization.md b/docs/mindspore/source_zh_cn/features/compile/graph_optimization.md index 34e24d8071e26923f820416feb01c9e226fe779b..7373042e104a936e825946169f7434390b22b110 100644 --- a/docs/mindspore/source_zh_cn/features/compile/graph_optimization.md +++ b/docs/mindspore/source_zh_cn/features/compile/graph_optimization.md @@ -2,7 +2,7 @@ [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindspore/source_zh_cn/features/compile/graph_optimization.md) -与传统编译器类似,MindSpore 在进行完构图之后,也会进行编译优化。编译优化的主要目的是通过静态分析技术对 MindSpore 的中间表示 
[MindIR](https://www.mindspore.cn/docs/zh-CN/master/design/all_scenarios.html#%E4%B8%AD%E9%97%B4%E8%A1%A8%E7%A4%BAmindir) 进行分析和转换,以达成减小目标代码大小、提升代码执行效率、降低运行时资源开销或者提升其它性能指标的目的。编译优化是图编译系统中的重要一环,对提升整个神经网络模型的性能和资源利用率有着极其重要的意义,相较于未经过编译优化的原始代码,编译优化可能带来数倍甚至数十倍的性能提升。 +与传统编译器类似,MindSpore 在进行完构图之后,也会进行编译优化。编译优化的主要目的是通过静态分析技术对 MindSpore 的中间表示 MindIR 进行分析和转换,以达成减小目标代码大小、提升代码执行效率、降低运行时资源开销或者提升其它性能指标的目的。编译优化是图编译系统中的重要一环,对提升整个神经网络模型的性能和资源利用率有着极其重要的意义,相较于未经过编译优化的原始代码,编译优化可能带来数倍甚至数十倍的性能提升。 本节主要介绍独立于特定硬件的前端编译优化技术,特定于硬件的后端编译优化技术不在本节的讨论范围之内。 diff --git a/docs/mindspore/source_zh_cn/features/overview.md b/docs/mindspore/source_zh_cn/features/overview.md index 8cf8b017ba16cef9e4e23dccd27411499b030465..414bee3a89fbfac5d54005f46decc2e72876306b 100644 --- a/docs/mindspore/source_zh_cn/features/overview.md +++ b/docs/mindspore/source_zh_cn/features/overview.md @@ -33,13 +33,13 @@ MindSpore整体架构如下: ### 函数式和对象式融合编程范式 -MindSpore提供面向对象和面向函数的[编程范式](https://www.mindspore.cn/docs/zh-CN/master/design/programming_paradigm.html),二者都可用来构建网络算法和训练流程。 +MindSpore提供面向对象和面向函数的编程范式,二者都可用来构建网络算法和训练流程。 开发者可以基于nn.Cell类派生定义所需功能的AI网络或层(layer),并可通过对象的嵌套调用的方式将已定义的各种layer进行组装,完成整个AI网络的定义。 同时,开发者也可以定义一个可被MindSpore源到源编译转换的Python纯函数,通过MindSpore提供的函数或装饰器,将其加速执行。在满足MindSpore静态语法的要求下,Python纯函数可以支持子函数嵌套、控制逻辑甚至是递归函数表达。因此基于此编程范式,开发者可灵活使能一些功能特性,更易于表达业务逻辑。 -MindSpore实现了[函数式微分编程](https://www.mindspore.cn/docs/zh-CN/master/design/programming_paradigm.html#函数式微分编程),对可被微分求导的函数对象,按照调用关系,基于调用链进行求导。采取这样自动微分策略更符合数学语义,与基本代数中的复合函数有直观的对应关系,只要已知基础函数的求导公式,就能推导出由任意基础函数组成的复合函数的求导公式。 +MindSpore实现了函数式微分编程,对可被微分求导的函数对象,按照调用关系,基于调用链进行求导。采取这样自动微分策略更符合数学语义,与基本代数中的复合函数有直观的对应关系,只要已知基础函数的求导公式,就能推导出由任意基础函数组成的复合函数的求导公式。 同时,基于函数式编程范式,MindSpore提供了丰富高阶函数如vmap、shard等内置高阶函数功能。与微分求导函数grad一样,可以让开发者方便的构造一个函数或对象,作为高阶函数的参数。高阶函数经过内部编译优化,生成针对开发者函数的优化版本,实现如向量化变换、分布式并行切分等特点功能。 @@ -57,7 +57,7 @@ MindSpore基于Python构建神经网络的图结构,相比于传统的静态 
原生Python表达可基于Python控制流关键字,直接使能静态图模式的执行,使得动静态图的编程统一性更高。同时开发者基于MindSpore的接口,可以灵活的对Python代码片段进行动静态图模式控制。即可以将程序局部函数以静态图模式执行([mindspore.jit](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.jit.html))而同时其他函数按照动态图模式执行。从而使得在与常用Python库、自定义Python函数进行穿插执行使用时,开发者可以灵活指定函数片段进行静态图优化加速,而不牺牲穿插执行的编程易用性。 -### [分布式并行](https://www.mindspore.cn/docs/zh-CN/master/design/distributed_training_design.html) +### 分布式并行 大模型参数越来越大,需要复杂和多样的分布式并行策略应对,MindSpore内置了多维分布式训练策略,可供开发者灵活组装使用。并通过并行抽象,隐藏通讯操作,简化开发者并行编程的复杂度。 @@ -71,7 +71,7 @@ MindSpore在并行化策略搜索中引入了张量重排布技术(Tensor Redi MindSpore基于编译技术,提供了丰富的硬件无关优化,如IR融合、代数化简、常数折叠、公共子表达式消除等。同时针对NPU、GPU等不同硬件,也提供各种硬件优化能力,从而更好的发挥硬件的大规模计算加速能力。 -#### [图算融合](https://www.mindspore.cn/docs/zh-CN/master/design/multi_level_compilation.html#图算融合) +#### [图算融合](https://www.mindspore.cn/docs/zh-CN/master/features/compile/multi_level_compilation.html#图算融合) MindSpore等主流AI计算框架对开发者提供的算子通常是从开发中可理解、易使用角度进行定义。每个算子承载的计算量不等,计算复杂度也各不相同。但从硬件执行角度看,这种天然的、基于用开发者角度的算子计算量划分,并不高效,也无法充分发挥硬件资源计算能力。主要体现在: @@ -95,12 +95,12 @@ Host侧CPU负责将图或算子下发到昇腾芯片。昇腾芯片由于具备 数据下沉即数据通过通道直接传送到Device上。 -### [全场景统一部署](https://www.mindspore.cn/docs/zh-CN/master/design/all_scenarios.html) +### 全场景统一部署 MindSpore是训推一体的AI框架,同时支持训练和推理等功能。同时MindSpore支持CPU、GPU、NPU等多种芯片,并且在不同芯片上提供统一的编程使用接口以及可生成在多种硬件上加载执行的离线模型。 MindSpore按照实际执行环境和业务需求,提供多种规格的版本形态,支持部署在云端、服务器端、手机等嵌入式设备端以及耳机等超轻量级设备端上的部署执行。 -### [三方硬件接入](https://www.mindspore.cn/docs/zh-CN/master/design/pluggable_device.html) +### [三方硬件接入](https://www.mindspore.cn/docs/zh-CN/master/features/runtime/pluggable_device.html) MindSpore基于统一MindIR构建了开放式AI架构,支持第三方芯片插件化、标准化、低成本快速对接,可接入GPU系列芯片亦可接入各类DSA芯片。MindSpore提供Kernel模式对接和Graph模式对接两种芯片接入方式,芯片产商可根据自身特点进行接入方式选择。 diff --git a/docs/mindspore/source_zh_cn/features/parallel/data_parallel.md b/docs/mindspore/source_zh_cn/features/parallel/data_parallel.md index 5e6a541501fb7ae2f657ad68066cc64882679d25..26bdfeea1f0aecb8f54784fa95cb02a9a5122066 100644 --- a/docs/mindspore/source_zh_cn/features/parallel/data_parallel.md 
+++ b/docs/mindspore/source_zh_cn/features/parallel/data_parallel.md @@ -15,7 +15,7 @@ ## 整体流程 -![整体流程](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/design/images/data_parallel.png) +![整体流程](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/features/images/data_parallel.png) 1. 环境依赖 diff --git a/docs/mindspore/source_zh_cn/index.rst b/docs/mindspore/source_zh_cn/index.rst index 8dce7e76ad55a1581b85e2dbc709f92c94f3b83e..2d501fd1b10904ecb3f3e4526b343d2db874dbce 100644 --- a/docs/mindspore/source_zh_cn/index.rst +++ b/docs/mindspore/source_zh_cn/index.rst @@ -6,7 +6,6 @@ MindSpore 文档 :maxdepth: 1 :hidden: - design/index features/index api_python/index faq/index @@ -19,7 +18,7 @@ MindSpore 文档
- +
设计 diff --git a/tutorials/source_en/beginner/dataset.md b/tutorials/source_en/beginner/dataset.md index bb36f2c8da1f96023152d226e5ba9a894ec1f561..9989fb7fc032ba86c54cf70c12fbfad3329f92e6 100644 --- a/tutorials/source_en/beginner/dataset.md +++ b/tutorials/source_en/beginner/dataset.md @@ -6,7 +6,7 @@ Data is the foundation of deep learning, and high-quality data input is beneficial to the entire deep neural network. -MindSpore provides Pipeline-based [Data Engine](https://www.mindspore.cn/docs/en/master/design/data_engine.html) and achieves efficient data preprocessing through `Dataset`, `Transforms` and `Batch` operator. The pipeline nodes are: +MindSpore provides Pipeline-based [Data Engine](https://www.mindspore.cn/docs/en/master/features/data_engine.html) and achieves efficient data preprocessing through `Dataset`, `Transforms` and `Batch` operator. The pipeline nodes are: 1. Dataset is the start of Pipeline and is used to load raw data to memory. `mindspore.dataset` provides [built-in dataset interfaces](https://www.mindspore.cn/docs/en/master/api_python/mindspore.dataset.loading.html) for loading text, image, audio, etc., and provides [interfaces](https://www.mindspore.cn/docs/en/master/api_python/mindspore.dataset.loading.html#user-defined) for loading customized datasets. diff --git a/tutorials/source_en/beginner/introduction.md b/tutorials/source_en/beginner/introduction.md index 745fedf41897cac53826d74fe64d956074403e05..06a15e52efee87002426299f9646e1eb91c8f708 100644 --- a/tutorials/source_en/beginner/introduction.md +++ b/tutorials/source_en/beginner/introduction.md @@ -16,7 +16,7 @@ The overall architecture of MindSpore is as follows: 2. Deep Learning + Scientific Computing: Provides developers with various Python interfaces required for AI model development, maximizing compatibility with developers' habits in the Python ecosystem; 3. 
Core: As the core of the AI framework, it builds the Tensor data structure, basic operation operators, autograd module for automatic differentiation, Parallel module for parallel computing, compile capabilities, and runtime management module. -![arch](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/design/images/arch_en.png) +![arch](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_en/features/images/arch_en.png) ### Design Philosophy diff --git a/tutorials/source_en/beginner/quick_start.md b/tutorials/source_en/beginner/quick_start.md index 2abc001ea0a3277f8e4336f928d03602e7edce73..268e8a6461130f30230cf1cf080c671f7d07381d 100644 --- a/tutorials/source_en/beginner/quick_start.md +++ b/tutorials/source_en/beginner/quick_start.md @@ -15,7 +15,7 @@ from mindspore.dataset import MnistDataset ## Processing a Dataset -MindSpore provides Pipeline-based [Data Engine](https://www.mindspore.cn/docs/zh-CN/master/design/data_engine.html) and achieves efficient data preprocessing through [Data Loading and Processing](https://www.mindspore.cn/tutorials/en/master/beginner/dataset.html). In this tutorial, we use the Mnist dataset and pre-process dataset by using the data transformations provided by `mindspore.dataset`, after automatically downloaded. +MindSpore provides Pipeline-based [Data Engine](https://www.mindspore.cn/docs/zh-CN/master/features/data_engine.html) and achieves efficient data preprocessing through [Data Loading and Processing](https://www.mindspore.cn/tutorials/en/master/beginner/dataset.html). In this tutorial, we use the Mnist dataset and pre-process dataset by using the data transformations provided by `mindspore.dataset`, after automatically downloaded. > The sample code in this chapter relies on `download`, which can be installed by using the command `pip install download`. 
If this document is run as Notebook, you need to restart the kernel after installation to execute subsequent code. diff --git a/tutorials/source_en/beginner/save_load.md b/tutorials/source_en/beginner/save_load.md index c92ba00681dc3b6a6adac9ac58eeb495d4bd8a0d..259b4e4f13fc30f1f9cee248bf3703c6b79b009c 100644 --- a/tutorials/source_en/beginner/save_load.md +++ b/tutorials/source_en/beginner/save_load.md @@ -52,7 +52,7 @@ print(param_not_load) ## Saving and Loading MindIR -In addition to Checkpoint, MindSpore provides a unified [Intermediate Representation (IR)](https://www.mindspore.cn/docs/en/master/design/all_scenarios.html#mindspore-ir-mindir) for cloud side (training) and end side (inference). Models can be saved as MindIR directly by using the `export` interface (only support strict graph mode). +In addition to Checkpoint, MindSpore provides a unified Intermediate Representation (IR) for cloud side (training) and end side (inference). Models can be saved as MindIR directly by using the `export` interface (only supports strict graph mode). ```python mindspore.set_context(mode=mindspore.GRAPH_MODE, jit_syntax_level=mindspore.STRICT) diff --git a/tutorials/source_en/compile/static_graph.md b/tutorials/source_en/compile/static_graph.md index 535649e4920cc61bc4c590ebcf1813434eda5279..a6a07380340e912b0e2ca70c534c46cfe867154c 100644 --- a/tutorials/source_en/compile/static_graph.md +++ b/tutorials/source_en/compile/static_graph.md @@ -9,8 +9,7 @@ computation graph, and then the static computation graph is executed. In static graph mode, MindSpore converts Python source code into Intermediate Representation IR by means of source code conversion and optimizes IR graphs on this basis, and finally executes the optimized graphs on hardware devices. MindSpore uses a functional IR based on -graph representations, called MindIR. See [middle representationMindIR](https://www.mindspore.cn/docs/en/master/design/all_scenarios.html#mindspore-ir-mindir) -for details.
+graph representations, called MindIR. Currently, there are three main methods for converting Python source code into Intermediate Representation (IR): parsing based on the Abstract Syntax Tree (AST), parsing based on ByteCode, and the method based on operator call tracing (Trace). These three modes differ to some @@ -1532,7 +1531,7 @@ compilation problems can be found in [Network compilation](https://www.mindspore TypeError: The parameters number of the function is 3, but the number of provided arguments is 2. ``` -3. In graph mode, some Python syntax is difficult to convert to [intermediate MindIR](https://www.mindspore.cn/docs/en/master/design/all_scenarios.html#mindspore-ir-mindir) +3. Some Python syntax is difficult to convert to the intermediate representation MindIR in graph mode. For Python keywords, there are some keywords that are not supported in graph mode: AsyncFunctionDef, Delete, AnnAssign, AsyncFor, AsyncWith, Match, Try, Import, ImportFrom, Nonlocal, NamedExpr, Set, SetComp, Await, Yield, YieldFrom. If the relevant syntax is used in graph mode, an error message will alert the user. diff --git a/tutorials/source_en/debug/error_analysis/mindir.md b/tutorials/source_en/debug/error_analysis/mindir.md index c659ac823379066ae8132e4c3a110f0b1d5ea3cf..1f27eb91d18a5d2fc012e3ed7d54fdcba8ced707 100644 --- a/tutorials/source_en/debug/error_analysis/mindir.md +++ b/tutorials/source_en/debug/error_analysis/mindir.md @@ -240,7 +240,7 @@ Taking graph `@19_1___main___Net_construct_304` as an example: - Line 23 to 81 indicate the graph structure, which contains several nodes, namely, `CNode`. In this example, there are `Sub`, `Add`, `Mul` defined in the function `__init__`.
-The `CNode` ([check the design of ANF-IR](https://www.mindspore.cn/docs/en/master/design/all_scenarios.html#syntax)) information format is as follows: from left to right, the ordinal number, node name - debug_name, operator name - op_name, input node - arg, attributes of the node - primitive_attrs, input and output specifications, source code parsing call stack and other information. Because the ANF graph is a unidirectional acyclic graph, the connection between nodes is displayed only based on the input relationship. The corresponding source code reflects the relationship between the `CNode` and the script source code. For example, line 75 is parsed from `if b`. +The `CNode` information format is as follows: from left to right, the ordinal number, node name - debug_name, operator name - op_name, input node - arg, attributes of the node - primitive_attrs, input and output specifications, source code parsing call stack and other information. Because the ANF graph is a directed acyclic graph, the connection between nodes is displayed only based on the input relationship. The corresponding source code reflects the relationship between the `CNode` and the script source code. For example, line 75 is parsed from `if b`. ```text %[No.]([debug_name]) = [op_name]([arg], ...) primitive_attrs: {[key]: [value], ...} diff --git a/tutorials/source_en/nlp/sentiment_analysis.md b/tutorials/source_en/nlp/sentiment_analysis.md index 6ccd5f0613eb0d5e27e9b909ed127ced2876780b..7b046196e696de89e3bed55664ad5c4257d8f616 100644 --- a/tutorials/source_en/nlp/sentiment_analysis.md +++ b/tutorials/source_en/nlp/sentiment_analysis.md @@ -268,7 +268,7 @@ Word segmentation is performed on the IMDB dataset loaded by the loader, but the - Use the Vocab to convert all tokens to index IDs. - The length of the text sequence is unified. If the length is insufficient, `` is used to supplement the length. If the length exceeds the limit, the excess part is truncated.
-Here, the API provided in `mindspore.dataset` is used for preprocessing. The APIs used here are designed for MindSpore high-performance data engines. The operations corresponding to each API are considered as a part of the data pipeline. For details, see [MindSpore Data Engine](https://www.mindspore.cn/docs/en/master/design/data_engine.html). +Here, the APIs provided in `mindspore.dataset` are used for preprocessing. These APIs are designed for MindSpore's high-performance data engine, and the operation corresponding to each API is considered part of the data pipeline. For details, see [MindSpore Data Engine](https://www.mindspore.cn/docs/en/master/features/data_engine.html). For the table query operation from a token to an index ID, use the `text.Lookup` API to load the built vocabulary and specify `unknown_token`. The [PadEnd](https://www.mindspore.cn/docs/en/master/api_python/dataset_transforms/mindspore.dataset.transforms.PadEnd.html) API is used to unify the length of the text sequence. This API defines the maximum length and padding value (`pad_value`). In this example, the maximum length is 500, and the padding value corresponds to the index ID of `` in the vocabulary. diff --git a/tutorials/source_en/orange_pi/dev_start.md b/tutorials/source_en/orange_pi/dev_start.md index dff58dae22bb4e953f60141267ee8f05dd72bde5..9d88c7676cfc91428e73990042d96b97bd51aa49 100644 --- a/tutorials/source_en/orange_pi/dev_start.md +++ b/tutorials/source_en/orange_pi/dev_start.md @@ -35,7 +35,7 @@ from mindspore.dataset import MnistDataset ## Preparing and Loading Dataset -MindSpore provides a Pipeline-based [data engine](https://www.mindspore.cn/docs/en/master/design/data_engine.html) to realize efficient data preprocessing through [data loading and processing](https://www.mindspore.cn/tutorials/en/master/beginner/dataset.html) to realize efficient data preprocessing.
In this case, we use the Mnist dataset, which is automatically downloaded and then preprocessed using the data transforms provided by `mindspore.dataset`. +MindSpore provides a Pipeline-based [data engine](https://www.mindspore.cn/docs/en/master/features/data_engine.html) that realizes efficient data preprocessing through [data loading and processing](https://www.mindspore.cn/tutorials/en/master/beginner/dataset.html). In this case, we use the Mnist dataset, which is automatically downloaded and then preprocessed using the data transforms provided by `mindspore.dataset`. ```python # install download @@ -368,4 +368,4 @@ The required environment for the operation of this case: | OrangePi AIpro | Image | CANN Toolkit/Kernels | MindSpore | | :----:| :----: | :----:| :----: | -| 8T 16G | Ubuntu | 8.1.RC1| 2.6.0 | \ No newline at end of file +| 8T 16G | Ubuntu | 8.1.RC1| 2.6.0 | diff --git a/tutorials/source_en/parallel/multiple_copy.md b/tutorials/source_en/parallel/multiple_copy.md index 83a665400059791aa050d7530e207a25cba241e4..09236e288cb48b2f58bdd5c98220393cf3019aad 100644 --- a/tutorials/source_en/parallel/multiple_copy.md +++ b/tutorials/source_en/parallel/multiple_copy.md @@ -12,7 +12,7 @@ Usage Scenario: When there is model parallel in semi-automatic mode as well as i The data of input model is sliced according to the batch size dimension, thus modifying the existing single-copy form into a multi-copy form, so that when the underlying layer is communicating, the other copy carries out the computational operation without waiting, which ensures that the computation and communication times of multi-copy complement each other and improve the model performance. At the same time, splitting the data into a multi-copy form also reduces the number of parameter of the operator inputs and reduces the computation time of a single operator, which is helpful in improving the model performance.
-![Multi-copy parallel](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/design/images/multi_copy.png) +![Multi-copy parallel](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/features/images/multi_copy.png) ### Related Interfaces diff --git a/tutorials/source_zh_cn/beginner/dataset.ipynb b/tutorials/source_zh_cn/beginner/dataset.ipynb index 3526f5aa076c2d260b91486e003813eb2c4b8495..d338b21210820df148e8a68b00ec2d6bbd4c65ec 100644 --- a/tutorials/source_zh_cn/beginner/dataset.ipynb +++ b/tutorials/source_zh_cn/beginner/dataset.ipynb @@ -19,7 +19,7 @@ "\n", "数据是深度学习的基础,高质量的数据输入将在整个深度神经网络中起到积极作用。\n", "\n", - "MindSpore提供基于Pipeline的[数据引擎](https://www.mindspore.cn/docs/zh-CN/master/design/data_engine.html),通过 `数据集(Dataset)`、`数据变换(Transforms)`和`数据批处理(Batch)`,可以实现高效的数据预处理。其中:\n", + "MindSpore提供基于Pipeline的[数据引擎](https://www.mindspore.cn/docs/zh-CN/master/features/data_engine.html),通过 `数据集(Dataset)`、`数据变换(Transforms)`和`数据批处理(Batch)`,可以实现高效的数据预处理。其中:\n", "\n", "1. 数据集(Dataset)是Pipeline的起始,用于从存储中加载原始数据至内存。`mindspore.dataset`提供了内置的图像、文本、音频等[数据集加载接口](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.loading.html#),同时支持[自定义数据集加载接口](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.loading.html#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AE%E9%9B%86%E5%8A%A0%E8%BD%BD-1);\n", "\n", diff --git a/tutorials/source_zh_cn/beginner/introduction.ipynb b/tutorials/source_zh_cn/beginner/introduction.ipynb index 70504f4a1a2a72b6f15fc7b8bb934f702cbc2c39..d506a22f37fb291ea62eebad1c635b02bb5fdd25 100644 --- a/tutorials/source_zh_cn/beginner/introduction.ipynb +++ b/tutorials/source_zh_cn/beginner/introduction.ipynb @@ -31,7 +31,7 @@ "2. 深度学习+科学计算:为开发者提供AI模型开发所需各类Python接口,最大化保持开发者在Python生态开发的使用习惯;\n", "3. 
核心:作为AI框架的核心,构建Tensor数据结构、基础运算算子Operator、自动求导autograd模块、并行计算Parallel模块、编译compile能力以及runtime运行时管理模块。\n", "\n", - "![arch](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/design/images/arch_zh.png)\n", + "![arch](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindspore/source_zh_cn/features/images/arch_zh.png)\n", "\n", "### 设计理念\n", "\n", diff --git a/tutorials/source_zh_cn/beginner/quick_start.ipynb b/tutorials/source_zh_cn/beginner/quick_start.ipynb index cb0660d243ac6c8c8f1c50cc725fdd67e2286e44..d213d253b8ff08f095172cab763f009ea5c6781a 100644 --- a/tutorials/source_zh_cn/beginner/quick_start.ipynb +++ b/tutorials/source_zh_cn/beginner/quick_start.ipynb @@ -36,7 +36,7 @@ "source": [ "## 处理数据集\n", "\n", - "MindSpore提供基于Pipeline的[数据引擎](https://www.mindspore.cn/docs/zh-CN/master/design/data_engine.html),通过[数据集(Dataset)](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/dataset.html)实现高效的数据预处理。在本教程中,我们使用Mnist数据集,自动下载完成后,使用`mindspore.dataset`提供的数据变换进行预处理。\n", + "MindSpore提供基于Pipeline的[数据引擎](https://www.mindspore.cn/docs/zh-CN/master/features/data_engine.html),通过[数据集(Dataset)](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/dataset.html)实现高效的数据预处理。在本教程中,我们使用Mnist数据集,自动下载完成后,使用`mindspore.dataset`提供的数据变换进行预处理。\n", "\n", "> 本章节中的示例代码依赖`download`,可使用命令`pip install download`安装。如本文档以Notebook运行时,完成安装后需要重启kernel才能执行后续代码。" ] diff --git a/tutorials/source_zh_cn/beginner/save_load.ipynb b/tutorials/source_zh_cn/beginner/save_load.ipynb index 9badb402548f98efc68f41044af3f414594d85d3..d31bb8eea8a04325c76f0574a7de5480a7867ed1 100644 --- a/tutorials/source_zh_cn/beginner/save_load.ipynb +++ b/tutorials/source_zh_cn/beginner/save_load.ipynb @@ -136,7 +136,7 @@ "source": [ "## 保存和加载MindIR\n", "\n", - "除Checkpoint外,MindSpore提供了云侧(训练)和端侧(推理)统一的[中间表示(Intermediate 
Representation,IR)](https://www.mindspore.cn/docs/zh-CN/master/design/all_scenarios.html#中间表示mindir)。可使用`export`接口直接将模型保存为MindIR(当前仅支持严格图模式)。" + "除Checkpoint外,MindSpore提供了云侧(训练)和端侧(推理)统一的中间表示(Intermediate Representation,IR)。可使用`export`接口直接将模型保存为MindIR(当前仅支持严格图模式)。" ] }, { diff --git a/tutorials/source_zh_cn/compile/static_graph.md b/tutorials/source_zh_cn/compile/static_graph.md index cdb5df325782da02f1d10a2978416f1ddc325a7c..f2200e4aba8f4fa78ddfdfe1f9fa5498214343f4 100644 --- a/tutorials/source_zh_cn/compile/static_graph.md +++ b/tutorials/source_zh_cn/compile/static_graph.md @@ -8,7 +8,7 @@ Compilation,JIT)模式下,Python代码并不是由Python解释器直接执行,而是先将代码编译成静态计算图,再执行该静态计算图。 在静态图模式下,MindSpore通过源码转换的方式,将Python的源码转换成中间表达IR(Intermediate -Representation),并在此基础上对IR图进行优化,最终在硬件设备上执行优化后的图。MindSpore使用基于图表示的函数式IR,称为MindIR,详情可参考[中间表示MindIR](https://www.mindspore.cn/docs/zh-CN/master/design/all_scenarios.html#中间表示mindir)。 +Representation),并在此基础上对IR图进行优化,最终在硬件设备上执行优化后的图。MindSpore使用基于图表示的函数式IR,称为MindIR。 目前,将Python源码转换为中间表示(IR)的方法主要有三种:基于抽象语法树(Abstract Syntax Tree, @@ -1437,7 +1437,7 @@ in-place操作是指直接修改输入张量的内容,而不创建新的张量 TypeError: The parameters number of the function is 3, but the number of provided arguments is 2. ``` -3. 在图模式下,有些Python语法难以转换成图模式下的[中间表示MindIR](https://www.mindspore.cn/docs/zh-CN/master/design/all_scenarios.html#中间表示mindir)。对标Python的关键字,存在部分关键字在图模式下是不支持的:AsyncFunctionDef、Delete、AnnAssign、AsyncFor、AsyncWith、Match、Try、Import、ImportFrom、Nonlocal、NamedExpr、Set、SetComp、Await、Yield、YieldFrom。如果在图模式下使用相关的语法,将会有相应的报错信息提醒用户。 +3. 
在图模式下,有些Python语法难以转换成图模式下的中间表示MindIR。对标Python的关键字,存在部分关键字在图模式下是不支持的:AsyncFunctionDef、Delete、AnnAssign、AsyncFor、AsyncWith、Match、Try、Import、ImportFrom、Nonlocal、NamedExpr、Set、SetComp、Await、Yield、YieldFrom。如果在图模式下使用相关的语法,将会有相应的报错信息提醒用户。 如果使用Try语句,示例如下: diff --git a/tutorials/source_zh_cn/debug/error_analysis/mindir.md b/tutorials/source_zh_cn/debug/error_analysis/mindir.md index 945cddebe235d36953555b8c7249899543c958ed..aa604459f9d1f377cccaff963e2204e227bf0c0e 100644 --- a/tutorials/source_zh_cn/debug/error_analysis/mindir.md +++ b/tutorials/source_zh_cn/debug/error_analysis/mindir.md @@ -240,7 +240,7 @@ print(out) - 第23-81行展示了图结构的信息,图中含有若干个节点,即`CNode`。该图包含`Sub`、`Add`、`Mul`这些在网路所调用的接口中所用到的算子。 -`CNode`([ANF-IR的设计请查看](https://www.mindspore.cn/docs/zh-CN/master/design/all_scenarios.html#文法定义))的信息遵循如下格式,从左到右分别为序号、节点名称-debug_name、算子名称-op_name、输入节点-arg、节点的属性-primitive_attrs、输入和输出的规格、源码解析调用栈等信息。 +`CNode`的信息遵循如下格式,从左到右分别为序号、节点名称-debug_name、算子名称-op_name、输入节点-arg、节点的属性-primitive_attrs、输入和输出的规格、源码解析调用栈等信息。 由于ANF图为单向无环图,所以此处仅根据输入关系来体现节点与节点的连接关系。关联代码行则体现了`CNode`与脚本源码之间的关系,例如第75行表明该节点是由脚本中`if b`这一行解析而来。 ```text diff --git a/tutorials/source_zh_cn/nlp/sentiment_analysis.ipynb b/tutorials/source_zh_cn/nlp/sentiment_analysis.ipynb index e9efdd369ba1d75a8200c0da7b2c7d64815d446a..ad3f2af12aff67a0a5914f18f645f37500c866f3 100644 --- a/tutorials/source_zh_cn/nlp/sentiment_analysis.ipynb +++ b/tutorials/source_zh_cn/nlp/sentiment_analysis.ipynb @@ -463,7 +463,7 @@ "- 通过Vocab将所有的Token处理为index id。\n", "- 将文本序列统一长度,不足的使用``补齐,超出的进行截断。\n", "\n", - "这里我们使用`mindspore.dataset`中提供的接口进行预处理操作。这里使用到的接口均为MindSpore的高性能数据引擎设计,每个接口对应操作视作数据流水线的一部分,详情请参考[MindSpore数据引擎](https://www.mindspore.cn/docs/zh-CN/master/design/data_engine.html)。\n", + "这里我们使用`mindspore.dataset`中提供的接口进行预处理操作。这里使用到的接口均为MindSpore的高性能数据引擎设计,每个接口对应操作视作数据流水线的一部分,详情请参考[MindSpore数据引擎](https://www.mindspore.cn/docs/zh-CN/master/features/data_engine.html)。\n", "首先针对token到index 
id的查表操作,使用`text.Lookup`接口,将前文构造的词表加载,并指定`unknown_token`。其次为文本序列统一长度操作,使用[PadEnd](https://www.mindspore.cn/docs/zh-CN/master/api_python/dataset_transforms/mindspore.dataset.transforms.PadEnd.html)接口,此接口定义最大长度和补齐值(`pad_value`),这里我们取最大长度为500,填充值对应词表中``的index id。\n", "\n", "> 除了对数据集中`text`进行预处理外,由于后续模型训练的需要,要将`label`数据转为float32格式。" diff --git a/tutorials/source_zh_cn/orange_pi/dev_start.ipynb b/tutorials/source_zh_cn/orange_pi/dev_start.ipynb index 26bb26916c38b83b56d9e241682a7f1380b3fbdf..fc9718a65a0b6287f978ecb4a6d10300f9cbceee 100644 --- a/tutorials/source_zh_cn/orange_pi/dev_start.ipynb +++ b/tutorials/source_zh_cn/orange_pi/dev_start.ipynb @@ -71,7 +71,7 @@ "source": [ "## 数据集准备与加载\n", "\n", - "MindSpore提供基于Pipeline的[数据引擎](https://www.mindspore.cn/docs/zh-CN/master/design/data_engine.html),通过[数据集(Dataset)](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/dataset.html)实现高效的数据预处理。在本案例中,我们使用Mnist数据集,自动下载完成后,使用`mindspore.dataset`提供的数据变换进行预处理。\n" + "MindSpore提供基于Pipeline的[数据引擎](https://www.mindspore.cn/docs/zh-CN/master/features/data_engine.html),通过[数据集(Dataset)](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/dataset.html)实现高效的数据预处理。在本案例中,我们使用Mnist数据集,自动下载完成后,使用`mindspore.dataset`提供的数据变换进行预处理。\n" ] }, {