diff --git a/docs/mindspore/source_en/faq/operators_compile.md b/docs/mindspore/source_en/faq/operators_compile.md
index 7152668e4610dace81a029f756dbfd0386d5e0df..0aadd2ba77d7bffc874a4402347dfacbe1bd34f2 100644
--- a/docs/mindspore/source_en/faq/operators_compile.md
+++ b/docs/mindspore/source_en/faq/operators_compile.md
@@ -7,7 +7,7 @@
 A: The `shape` of the [ops.concat](https://www.mindspore.cn/docs/en/r2.7.0/api_python/ops/mindspore.ops.concat.html) operator is too large. You are advised to set the output to `numpy` when creating an iterator for the `dataset` object. The setting is as follows:

 ```python
-gallaryloader.create_dict_iterator(output_numpy=True)
+galleryloader.create_dict_iterator(output_numpy=True)
 ```

 In the post-processing phase (in a non-network calculation process, that is, in a non-`construct` function), `numpy` can be directly used for computation. For example, `numpy.concatenate` is used to replace the `ops.concat` for computation.
diff --git a/docs/mindspore/source_en/faq/performance_tuning.md b/docs/mindspore/source_en/faq/performance_tuning.md
index 534e99e9bae93a53d58b42bae81247cd6768fb35..5c737e765f4959e256093ab25aff7781bef52ada 100644
--- a/docs/mindspore/source_en/faq/performance_tuning.md
+++ b/docs/mindspore/source_en/faq/performance_tuning.md
@@ -9,5 +9,4 @@ A: The `scipy 1.4` series versions may be used in the environment. Run the `pip

 ## Q: How to choose the batchsize to achieve the best performance when training models on the Ascend chip?

-A: When training the model on the Ascend chip, better training performance can be obtained when the batchsize is equal to the number of AI CORE or multiples. The number of AI CORE can be queried via the command line in the link.
-
+A: When training the model on the Ascend chip, better training performance can be obtained when the batchsize is equal to the number of AI CORE or multiples. The number of AI CORE can be queried via the command line in the [link](https://support.huawei.com/enterprise/zh/doc/EDOC1100206828/eedfacda).
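The `output_numpy=True` fix in the operators_compile.md hunk above can be sketched end to end. The snippet below is a minimal illustration rather than a change proposed by the patch: it assumes a hypothetical `galleryloader` built with `mindspore.dataset.GeneratorDataset` over a single `"feature"` column of random data, pulls NumPy arrays from the iterator, and joins them with `numpy.concatenate` during post-processing instead of `ops.concat`.

```python
import numpy as np
import mindspore.dataset as ds

# Hypothetical data source: 10 batches of (4, 8) float32 features.
def generator():
    for _ in range(10):
        yield (np.random.rand(4, 8).astype(np.float32),)

galleryloader = ds.GeneratorDataset(generator, column_names=["feature"])

# output_numpy=True makes the iterator return NumPy arrays instead of Tensors.
batches = [item["feature"] for item in galleryloader.create_dict_iterator(output_numpy=True)]

# numpy.concatenate replaces ops.concat for the large concatenation outside construct().
features = np.concatenate(batches, axis=0)
print(features.shape)  # (40, 8)
```

Keeping the concatenation in NumPy keeps the oversized concat out of the compiled graph, which is what the FAQ answer recommends.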
diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
index c54072511261c435b9dbaba49cedf43aa1d7fa07..faaabf67e6f244dbd192396017ad3509e4b69753 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
@@ -261,7 +261,7 @@ Because of the framework mechanism, MindSpore does not provide the following par
 | [torch.distributed.batch_isend_irecv](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.batch_isend_irecv) | [mindspore.mint.distributed.batch_isend_irecv](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.batch_isend_irecv.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.distributed.broadcast](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.broadcast) | [mindspore.mint.distributed.broadcast](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.broadcast.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
 | [torch.distributed.broadcast_object_list](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.broadcast_object_list) | [mindspore.mint.distributed.broadcast_object_list](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.broadcast_object_list.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| []() | [mindspore.mint.distributed.destroy_process_group](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.destroy_process_group.html) | Unique to MindSpore|
+| - | [mindspore.mint.distributed.destroy_process_group](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.destroy_process_group.html) | Unique to MindSpore|
 | [torch.distributed.gather](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.gather) | [mindspore.mint.distributed.gather](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.gather.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
 | [torch.distributed.gather_object](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.gather_object) | [mindspore.mint.distributed.gather_object](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.gather_object.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.distributed.get_backend](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.get_backend) | [mindspore.mint.distributed.get_backend](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.get_backend.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
@@ -286,15 +286,15 @@ Because of the framework mechanism, MindSpore does not provide the following par
 | PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
 | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| [torch.nn.AdaptiveAvgPool1d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool1d.html) | [mindspore.mint.nn.AdaptiveAvgPool1d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool1d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| [torch.nn.AdaptiveAvgPool2d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool2d.html) | [mindspore.mint.nn.AdaptiveAvgPool2d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool2d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| [torch.nn.AdaptiveAvgPool3d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool3d.html) | [mindspore.mint.nn.AdaptiveAvgPool3d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool3d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| [torch.nn.AvgPool2d](https://PyTorch.org/docs/2.1/generated/torch.nn.AvgPool2d.html) | [mindspore.mint.nn.AvgPool2d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.AvgPool2d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
-| [torch.nn.BCELoss](https://PyTorch.org/docs/2.1/generated/torch.nn.BCELoss.html) | [mindspore.mint.nn.BCELoss](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BCELoss.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
+| [torch.nn.AdaptiveAvgPool1d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool1d.html) | [mindspore.mint.nn.AdaptiveAvgPool1d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool1d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.AdaptiveAvgPool2d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool2d.html) | [mindspore.mint.nn.AdaptiveAvgPool2d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool2d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.AdaptiveAvgPool3d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool3d.html) | [mindspore.mint.nn.AdaptiveAvgPool3d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool3d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.AvgPool2d](https://pytorch.org/docs/2.1/generated/torch.nn.AvgPool2d.html) | [mindspore.mint.nn.AvgPool2d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.AvgPool2d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
+| [torch.nn.BCELoss](https://pytorch.org/docs/2.1/generated/torch.nn.BCELoss.html) | [mindspore.mint.nn.BCELoss](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BCELoss.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/2.1/generated/torch.nn.BCEWithLogitsLoss.html) | [mindspore.mint.nn.BCEWithLogitsLoss](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BCEWithLogitsLoss.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
-| [torch.nn.BatchNorm1d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm1d.html) | [mindspore.mint.nn.BatchNorm1d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm1d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| [torch.nn.BatchNorm2d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm2d.html) | [mindspore.mint.nn.BatchNorm2d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm2d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| [torch.nn.BatchNorm3d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm3d.html) | [mindspore.mint.nn.BatchNorm3d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm3d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.BatchNorm1d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm1d.html) | [mindspore.mint.nn.BatchNorm1d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm1d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.BatchNorm2d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm2d.html) | [mindspore.mint.nn.BatchNorm2d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm2d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.BatchNorm3d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm3d.html) | [mindspore.mint.nn.BatchNorm3d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm3d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
 | [torch.nn.ConstantPad1d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad1d.html) | [mindspore.mint.nn.ConstantPad1d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.ConstantPad1d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.nn.ConstantPad2d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad2d.html) | [mindspore.mint.nn.ConstantPad2d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.ConstantPad2d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.nn.ConstantPad3d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad3d.html) | [mindspore.mint.nn.ConstantPad3d](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.ConstantPad3d.html) | [Consistent](https://www.mindspore.cn/docs/en/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
diff --git a/docs/mindspore/source_zh_cn/faq/operators_compile.md b/docs/mindspore/source_zh_cn/faq/operators_compile.md
index 19fb15f2d996422ea87a524c89022e6ae06103c9..641ed108440d34708bcab6ed9b0d0465286364fe 100644
--- a/docs/mindspore/source_zh_cn/faq/operators_compile.md
+++ b/docs/mindspore/source_zh_cn/faq/operators_compile.md
@@ -7,7 +7,7 @@
 A: 这种报错,主要为[ops.concat](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/ops/mindspore.ops.concat.html)算子提示`shape`过大。建议对`dataset`对象创建迭代器时可设置输出为`numpy`, 如下设置:

 ```python
-gallaryloader.create_dict_iterator(output_numpy=True)
+galleryloader.create_dict_iterator(output_numpy=True)
 ```

 另外在上述后处理环节(非网络计算过程中,即非`construct`函数里面),可以采用`numpy`直接计算,如采用`numpy.concatenate`代替上述`ops.concat`进行计算。
@@ -22,7 +22,7 @@ A: 建议使用[ops.clip_by_value](https://www.mindspore.cn/docs/zh-CN/r2.7.0/ap

 ## Q: `TransData`算子的功能是什么,能否优化性能?

-A: `TransData`算子出现的场景是: 如果网络中相互连接的算子使用的数据格式不一致(如NC1HWC0),框架就会自动插入`transdata`算子使其转换成一致的数据格式,然后再进行计算。华为Ascend支持5D格式运算,通过`transdata`算子将数据由4D转为5D以提升性能。
+A: `TransData`算子出现的场景是: 如果网络中相互连接的算子使用的数据格式不一致(如NC1HWC0),框架就会自动插入`transdata`算子使其转换成一致的数据格式,然后再进行计算。华为Ascend支持5D格式运算,通过`TransData`算子将数据由4D转为5D以提升性能。
@@ -50,7 +50,7 @@ A: 可以使用mindspore.Tensor.var接口计算Tensor的方差,你可以参考

-## Q: `nn.Embedding`层与PyTorch相比缺少了`Padding`操作,有其余的算子可以实现吗?
+## Q: `nn.Embedding`层与PyTorch相比缺少了`Padding`操作,有其他算子可以实现吗?

 A: 在PyTorch中`padding_idx`的作用是将embedding矩阵中`padding_idx`位置的词向量置为0,并且反向传播时不会更新`padding_idx`位置的词向量。在MindSpore中,可以手动将embedding的`padding_idx`位置对应的权重初始化为0,并且在训练时通过`mask`的操作,过滤掉`padding_idx`位置对应的`Loss`。
@@ -59,11 +59,11 @@ A: 在PyTorch中`padding_idx`的作用是将embedding矩阵中`padding_idx`位

 ## Q: Operations中`Tile`算子执行到`__infer__`时`value`值为`None`,丢失了数值是怎么回事?

 A: `Tile`算子的`multiples input`必须是一个常量(该值不能直接或间接来自于图的输入)。否则构图的时候会拿到一个`None`的数据,因为图的输入是在图执行的时候才传下去的,构图的时候拿不到图的输入数据。
-相关的资料可以看[静态图语法支持](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0/compile/static_graph.html)。
+相关的资料可参考[静态图语法支持](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0/compile/static_graph.html)。

-## Q: 使用conv2d算子将卷积核设置为(3,10),Tensor设置为[2,2,10,10],在ModelArts上利用Ascend跑,报错: `FM_W+pad_left+pad_right-KW>=strideW`,而CPU下不报错,怎么回事?
+## Q: 使用conv2d算子将卷积核设置为(3,10),Tensor设置为[2,2,10,10],在ModelArts上利用Ascend跑,报错:`FM_W+pad_left+pad_right-KW>=strideW`,而CPU下不报错,怎么回事?

 A: TBE(Tensor Boost Engine)算子是华为自研的Ascend算子开发工具,在TVM框架基础上扩展,进行自定义算子开发。上述问题是这个TBE算子的限制,x的width必须大于kernel的width。CPU的这个算子没有这个限制,所以不报错。
@@ -71,13 +71,13 @@

 ## Q: 请问MindSpore实现了反池化操作了吗?类似于`nn.MaxUnpool2d` 这个反池化操作?

-A: 目前 MindSpore 还没有反池化相关的接口。用户可以通过自定义算子的方式自行开发算子,详情请见[自定义算子](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0/custom_program/op_custom.html)。
+A: 目前 MindSpore 暂无反池化相关的接口。用户可以通过自定义算子的方式自行开发算子,详情请见[自定义算子](https://www.mindspore.cn/tutorials/zh-CN/r2.7.0/custom_program/op_custom.html)。

 ## Q: Ascend环境上,一些尽管经过调优工具调试过的算子,性能依旧很差,这时候该怎么办?

-A: 遇到这种情况,
+A: 解决方案如下:

 1. 看一下这些算子是否为融合算子。因为算子预编译可能会改变算子的fusion_type属性,而该属性会影响算子的融合,导致原本不应该融合的小算子融合成了大算子,这些融合出来的大算子性能不一定比小算子性能好。
diff --git a/docs/mindspore/source_zh_cn/faq/performance_tuning.md b/docs/mindspore/source_zh_cn/faq/performance_tuning.md
index 252b2f23087518e195da744f35b5df8c18a239ac..311e9a8cb2a68870bca4fe322ee2b033ae22a651 100644
--- a/docs/mindspore/source_zh_cn/faq/performance_tuning.md
+++ b/docs/mindspore/source_zh_cn/faq/performance_tuning.md
@@ -2,7 +2,7 @@
 [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0/docs/mindspore/source_zh_cn/faq/performance_tuning.md)

-## Q: MindSpore安装完成,执行训练时发现网络性能异常,权重初始化耗时过长,怎么办? 
+## Q: MindSpore安装完成,执行训练时发现网络性能异常,权重初始化耗时过长,怎么办?

 A:可能与环境中使用了`scipy 1.4`系列版本有关,通过`pip list | grep scipy`命令可查看scipy版本,建议改成MindSpore要求的`scipy`版本。版本第三方库依赖可以在`requirement.txt`中查看。
@@ -11,5 +11,4 @@ A:可能与环境中使用了`scipy 1.4`系列版本有关,通过`pip list |

 ## Q: 在昇腾芯片上进行模型训练时,如何选择batchsize达到最佳性能效果?

-A:在昇腾芯片上进行模型训练时,在batchsize等于AI CORE个数或倍数的情况下可以获取更好的训练性能。AI CORE个数可通过链接中的命令行进行查询。
-
+A:在昇腾芯片上进行模型训练时,在batch_size等于AI CORE个数或倍数的情况下可以获取更好的训练性能。AI CORE个数可通过[链接](https://support.huawei.com/enterprise/zh/doc/EDOC1100206828/eedfacda)中的命令行进行查询。
diff --git a/docs/mindspore/source_zh_cn/faq/precision_tuning.md b/docs/mindspore/source_zh_cn/faq/precision_tuning.md
index afe41e92b210d4e05dc78e2f0d8b7bfac5ec323b..21a9087f98e252bd567fecc832ea2e20bfddaa58 100644
--- a/docs/mindspore/source_zh_cn/faq/precision_tuning.md
+++ b/docs/mindspore/source_zh_cn/faq/precision_tuning.md
@@ -4,7 +4,7 @@

 ## Q: 导致Loss值不收敛或者精度不达标的原因有哪些呢,应该怎样定位调优?

-A: 可能导致Loss值不收敛或者精度问题的原因很多,推荐参考下面总结,逐一排查问题。
+A: 可能导致Loss值不收敛或精度问题的原因很多,推荐参考下面总结,逐一排查问题。

 [MindSpore模型精度调优实战(一)精度问题的常见现象、原因和简要调优思路](https://www.hiascend.com/developer/blog/details/0215121673876901029)
diff --git a/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md b/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
index 418a61dc12fc98e3a8f6eaf112839e4f138f2c1e..36282ffb85448aa09a4c517784e975fd4fad0dc5 100644
--- a/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
@@ -261,7 +261,7 @@ mindspore.mint.argmax只有一种API形式,即mindspore.mint.argmax(input, dim
 | [torch.distributed.batch_isend_irecv](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.batch_isend_irecv) | [mindspore.mint.distributed.batch_isend_irecv](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.batch_isend_irecv.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.distributed.broadcast](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.broadcast) | [mindspore.mint.distributed.broadcast](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.broadcast.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
 | [torch.distributed.broadcast_object_list](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.broadcast_object_list) | [mindspore.mint.distributed.broadcast_object_list](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.broadcast_object_list.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| []() | [mindspore.mint.distributed.destroy_process_group](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.destroy_process_group.html) | MindSpore独有|
+| - | [mindspore.mint.distributed.destroy_process_group](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.destroy_process_group.html) | MindSpore独有|
 | [torch.distributed.gather](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.gather) | [mindspore.mint.distributed.gather](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.gather.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
 | [torch.distributed.gather_object](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.gather_object) | [mindspore.mint.distributed.gather_object](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.gather_object.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.distributed.get_backend](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.get_backend) | [mindspore.mint.distributed.get_backend](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.get_backend.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
@@ -286,15 +286,15 @@ mindspore.mint.argmax只有一种API形式,即mindspore.mint.argmax(input, dim
 | PyTorch 2.1 APIs | MindSpore APIs | 说明 |
 | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| [torch.nn.AdaptiveAvgPool1d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool1d.html) | [mindspore.mint.nn.AdaptiveAvgPool1d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool1d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| [torch.nn.AdaptiveAvgPool2d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool2d.html) | [mindspore.mint.nn.AdaptiveAvgPool2d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| [torch.nn.AdaptiveAvgPool3d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool3d.html) | [mindspore.mint.nn.AdaptiveAvgPool3d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool3d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| [torch.nn.AvgPool2d](https://PyTorch.org/docs/2.1/generated/torch.nn.AvgPool2d.html) | [mindspore.mint.nn.AvgPool2d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.AvgPool2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
-| [torch.nn.BCELoss](https://PyTorch.org/docs/2.1/generated/torch.nn.BCELoss.html) | [mindspore.mint.nn.BCELoss](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BCELoss.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
+| [torch.nn.AdaptiveAvgPool1d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool1d.html) | [mindspore.mint.nn.AdaptiveAvgPool1d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool1d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.AdaptiveAvgPool2d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool2d.html) | [mindspore.mint.nn.AdaptiveAvgPool2d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.AdaptiveAvgPool3d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool3d.html) | [mindspore.mint.nn.AdaptiveAvgPool3d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool3d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.AvgPool2d](https://pytorch.org/docs/2.1/generated/torch.nn.AvgPool2d.html) | [mindspore.mint.nn.AvgPool2d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.AvgPool2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
+| [torch.nn.BCELoss](https://pytorch.org/docs/2.1/generated/torch.nn.BCELoss.html) | [mindspore.mint.nn.BCELoss](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BCELoss.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/2.1/generated/torch.nn.BCEWithLogitsLoss.html) | [mindspore.mint.nn.BCEWithLogitsLoss](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BCEWithLogitsLoss.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
-| [torch.nn.BatchNorm1d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm1d.html) | [mindspore.mint.nn.BatchNorm1d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm1d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| [torch.nn.BatchNorm2d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm2d.html) | [mindspore.mint.nn.BatchNorm2d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| [torch.nn.BatchNorm3d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm3d.html) | [mindspore.mint.nn.BatchNorm3d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm3d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.BatchNorm1d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm1d.html) | [mindspore.mint.nn.BatchNorm1d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm1d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.BatchNorm2d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm2d.html) | [mindspore.mint.nn.BatchNorm2d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.BatchNorm3d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm3d.html) | [mindspore.mint.nn.BatchNorm3d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.BatchNorm3d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
 | [torch.nn.ConstantPad1d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad1d.html) | [mindspore.mint.nn.ConstantPad1d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.ConstantPad1d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.nn.ConstantPad2d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad2d.html) | [mindspore.mint.nn.ConstantPad2d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.ConstantPad2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.nn.ConstantPad3d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad3d.html) | [mindspore.mint.nn.ConstantPad3d](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.ConstantPad3d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/r2.7.0/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
diff --git a/tutorials/source_en/model_infer/ms_infer/ms_infer_quantization.md b/tutorials/source_en/model_infer/ms_infer/ms_infer_quantization.md
index 6396881f02dda9d34106632b05cb0390a5576746..aa2fd9f7b28872f23dac4f0a77ff2aa11f144035 100644
--- a/tutorials/source_en/model_infer/ms_infer/ms_infer_quantization.md
+++ b/tutorials/source_en/model_infer/ms_infer/ms_infer_quantization.md
@@ -6,7 +6,7 @@
 MindSpore is an all-scenario AI framework. When a model is deployed on the device or other lightweight devices, it may be subject to memory, power consumption, and latency. Therefore, the model needs to be compressed before deployment.

-[MindSpore Golden Stick](https://www.mindspore.cn/golden_stick/docs/en/r1.2.0/index.html) provides the model compression capability of MindSpore. MindSpore Golden Stick is a set of model compression algorithms jointly designed and developed by Huawei Noah's Ark team and Huawei MindSpore team. It provides a series of model compression algorithms for MindSpore, supporting quantization modes such as A16W8, A16W4, A8W8, and KVCache. For details, see [MindSpore Golden Stick](https://www.mindspore.cn/golden_stick/docs/en/r1.2.0/index.html).
+[MindSpore Golden Stick](https://www.mindspore.cn/golden_stick/docs/en/r1.2.0/index.html) provides the model compression capability of MindSpore. MindSpore Golden Stick is a set of model compression algorithms jointly designed and developed by Huawei Noah's Ark team and MindSpore team. It provides a series of model compression algorithms for MindSpore, supporting quantization modes such as A16W8, A16W4, A8W8, and KVCache. For details, see [MindSpore Golden Stick](https://www.mindspore.cn/golden_stick/docs/en/r1.2.0/index.html).

 ## Basic Model Quantization Process
diff --git a/tutorials/source_zh_cn/model_infer/ms_infer/ms_infer_quantization.md b/tutorials/source_zh_cn/model_infer/ms_infer/ms_infer_quantization.md
index 5c8f69f8b9b77e829a3ff4cc9ddc4c0a18995934..c7ee20f98f3e28a48a34337cdb503f16b87653b9 100644
--- a/tutorials/source_zh_cn/model_infer/ms_infer/ms_infer_quantization.md
+++ b/tutorials/source_zh_cn/model_infer/ms_infer/ms_infer_quantization.md
@@ -6,7 +6,7 @@
 MindSpore是一个全场景的AI框架。当模型部署到端侧或者其他轻量化设备上时,对于部署的内存、功耗、时延等有各种限制,因此在部署前需要对模型进行压缩。

-MindSpore的模型压缩能力由 [MindSpore Golden Stick](https://www.mindspore.cn/golden_stick/docs/zh-CN/r1.2.0/index.html) 提供,MindSpore Golden Stick是华为诺亚团队和华为MindSpore团队联合设计开发的一个模型压缩算法集,为MindSpore提供了一系列模型压缩算法,支持A16W8、A16W4、A8W8和KVCache等量化方式。详细资料可前往 [MindSpore Golden Stick官方资料](https://www.mindspore.cn/golden_stick/docs/zh-CN/r1.2.0/index.html) 查看。
+MindSpore的模型压缩能力由 [MindSpore Golden Stick](https://www.mindspore.cn/golden_stick/docs/zh-CN/r1.2.0/index.html) 提供,MindSpore Golden Stick是华为诺亚团队和MindSpore团队联合设计开发的一个模型压缩算法集,为MindSpore提供了一系列模型压缩算法,支持A16W8、A16W4、A8W8和KVCache等量化方式。详细资料可前往 [MindSpore Golden Stick官方资料](https://www.mindspore.cn/golden_stick/docs/zh-CN/r1.2.0/index.html) 查看。

 ## 模型量化基本流程
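To make the A16W8/A8W8 terminology in the two quantization hunks above concrete, here is a generic per-tensor 8-bit affine weight quantization sketch in NumPy. It only illustrates the "W8" half of those schemes and is not the MindSpore Golden Stick API; the Golden Stick documentation linked in the hunks describes the actual interfaces.

```python
import numpy as np

def quantize_per_tensor(w, num_bits=8):
    """Affine-quantize a float tensor to signed 8-bit integers with one scale/zero-point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map the int8 values back to float32, as a simulated A16W8-style compute would."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(64, 64).astype(np.float32)   # stand-in for a layer's weight matrix
q, scale, zp = quantize_per_tensor(w)
w_hat = dequantize(q, scale, zp)
print("max abs error:", np.abs(w - w_hat).max())  # stays on the order of the step `scale`
```

In A16W8 the activations commonly stay in 16-bit floating point while only the weights are stored this way; A8W8 additionally maps activations through the same kind of scale/zero-point transform.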