diff --git a/docs/mindspore/source_en/faq/operators_compile.md b/docs/mindspore/source_en/faq/operators_compile.md
index 831a345d1b314b5f0aa9aa20f02218f81f20f7ff..f27a1e805f7a51701fbc9ec18cdd30f44f0e0fb2 100644
--- a/docs/mindspore/source_en/faq/operators_compile.md
+++ b/docs/mindspore/source_en/faq/operators_compile.md
@@ -7,7 +7,7 @@
 A: The `shape` of the [ops.concat](https://www.mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.concat.html) operator is too large. You are advised to set the output to `numpy` when creating an iterator for the `dataset` object. The setting is as follows:

 ```python
-gallaryloader.create_dict_iterator(output_numpy=True)
+galleryloader.create_dict_iterator(output_numpy=True)
 ```

 In the post-processing phase (in a non-network calculation process, that is, in a non-`construct` function), `numpy` can be directly used for computation. For example, `numpy.concatenate` is used to replace the `ops.concat` for computation.
diff --git a/docs/mindspore/source_en/faq/performance_tuning.md b/docs/mindspore/source_en/faq/performance_tuning.md
index 9f428c3df4acab8cd3005b3726884a6b147b8b8d..854d66a9a62f3b46654d150d03ac202e45630f3c 100644
--- a/docs/mindspore/source_en/faq/performance_tuning.md
+++ b/docs/mindspore/source_en/faq/performance_tuning.md
@@ -9,5 +9,4 @@ A: The `scipy 1.4` series versions may be used in the environment. Run the `pip

 ## Q: How to choose the batchsize to achieve the best performance when training models on the Ascend chip?

-A: When training the model on the Ascend chip, better training performance can be obtained when the batchsize is equal to the number of AI CORE or multiples. The number of AI CORE can be queried via the command line in the link.
-
+A: When training a model on the Ascend chip, better training performance can be obtained when the batchsize is equal to the number of AI CORE or a multiple of it. The number of AI CORE can be queried via the command line provided in the [link](https://support.huawei.com/enterprise/zh/doc/EDOC1100206828/eedfacda).
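For reference, a minimal sketch of the iterator workaround described in the `operators_compile.md` answer above, assuming a toy `GeneratorDataset` with a single made-up `feature` column; the `galleryloader` name simply mirrors the snippet in the FAQ:

```python
import numpy as np
import mindspore.dataset as ds

# Hypothetical source: 10 feature vectors of length 4, exposed as one dataset column.
data = [(np.random.rand(4).astype(np.float32),) for _ in range(10)]
galleryloader = ds.GeneratorDataset(data, column_names=["feature"])

# Ask the iterator for numpy arrays instead of Tensors, as suggested in the FAQ.
outputs = []
for batch in galleryloader.create_dict_iterator(output_numpy=True, num_epochs=1):
    outputs.append(batch["feature"])

# Post-processing outside construct(): join with numpy instead of ops.concat.
features = np.concatenate([o[np.newaxis, :] for o in outputs], axis=0)
print(features.shape)  # (10, 4)
```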
diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
index 628ab9a20c3660e46cdc28d5b47777b8956e8e09..39f45290bc1875ce8b84336edc710f3178446ccf 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
@@ -261,7 +261,7 @@ Because of the framework mechanism, MindSpore does not provide the following par
 | [torch.distributed.batch_isend_irecv](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.batch_isend_irecv) | [mindspore.mint.distributed.batch_isend_irecv](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.distributed.batch_isend_irecv.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.distributed.broadcast](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.broadcast) | [mindspore.mint.distributed.broadcast](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.distributed.broadcast.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
 | [torch.distributed.broadcast_object_list](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.broadcast_object_list) | [mindspore.mint.distributed.broadcast_object_list](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.distributed.broadcast_object_list.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| []() | [mindspore.mint.distributed.destroy_process_group](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.distributed.destroy_process_group.html) | Unique to MindSpore|
+| - | [mindspore.mint.distributed.destroy_process_group](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.distributed.destroy_process_group.html) | Unique to MindSpore|
 | [torch.distributed.gather](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.gather) | [mindspore.mint.distributed.gather](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.distributed.gather.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
 | [torch.distributed.gather_object](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.gather_object) | [mindspore.mint.distributed.gather_object](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.distributed.gather_object.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.distributed.get_backend](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.get_backend) | [mindspore.mint.distributed.get_backend](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.distributed.get_backend.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
@@ -286,15 +286,15 @@ Because of the framework mechanism, MindSpore does not provide the following par
 | PyTorch 2.1 APIs | MindSpore APIs | Descriptions |
 | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| [torch.nn.AdaptiveAvgPool1d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool1d.html) | [mindspore.mint.nn.AdaptiveAvgPool1d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool1d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| [torch.nn.AdaptiveAvgPool2d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool2d.html) | [mindspore.mint.nn.AdaptiveAvgPool2d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool2d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| [torch.nn.AdaptiveAvgPool3d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool3d.html) | [mindspore.mint.nn.AdaptiveAvgPool3d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool3d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
-| [torch.nn.AvgPool2d](https://PyTorch.org/docs/2.1/generated/torch.nn.AvgPool2d.html) | [mindspore.mint.nn.AvgPool2d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.AvgPool2d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
-| [torch.nn.BCELoss](https://PyTorch.org/docs/2.1/generated/torch.nn.BCELoss.html) | [mindspore.mint.nn.BCELoss](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BCELoss.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
+| [torch.nn.AdaptiveAvgPool1d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool1d.html) | [mindspore.mint.nn.AdaptiveAvgPool1d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool1d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.AdaptiveAvgPool2d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool2d.html) | [mindspore.mint.nn.AdaptiveAvgPool2d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool2d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.AdaptiveAvgPool3d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool3d.html) | [mindspore.mint.nn.AdaptiveAvgPool3d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool3d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions)|
+| [torch.nn.AvgPool2d](https://pytorch.org/docs/2.1/generated/torch.nn.AvgPool2d.html) | [mindspore.mint.nn.AvgPool2d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.AvgPool2d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
+| [torch.nn.BCELoss](https://pytorch.org/docs/2.1/generated/torch.nn.BCELoss.html) | [mindspore.mint.nn.BCELoss](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BCELoss.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/2.1/generated/torch.nn.BCEWithLogitsLoss.html) | [mindspore.mint.nn.BCEWithLogitsLoss](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BCEWithLogitsLoss.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
-| [torch.nn.BatchNorm1d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm1d.html) | [mindspore.mint.nn.BatchNorm1d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BatchNorm1d.html) | Consistent functions, MindSpore is in inference mode by default. |
-| [torch.nn.BatchNorm2d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm2d.html) | [mindspore.mint.nn.BatchNorm2d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BatchNorm2d.html) | Consistent functions, MindSpore is in inference mode by default. |
-| [torch.nn.BatchNorm3d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm3d.html) | [mindspore.mint.nn.BatchNorm3d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BatchNorm3d.html) | Consistent functions, MindSpore is in inference mode by default. |
+| [torch.nn.BatchNorm1d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm1d.html) | [mindspore.mint.nn.BatchNorm1d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BatchNorm1d.html) | Consistent functions, MindSpore is in inference mode by default. |
+| [torch.nn.BatchNorm2d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm2d.html) | [mindspore.mint.nn.BatchNorm2d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BatchNorm2d.html) | Consistent functions, MindSpore is in inference mode by default. |
+| [torch.nn.BatchNorm3d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm3d.html) | [mindspore.mint.nn.BatchNorm3d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.BatchNorm3d.html) | Consistent functions, MindSpore is in inference mode by default. |
 | [torch.nn.ConstantPad1d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad1d.html) | [mindspore.mint.nn.ConstantPad1d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.ConstantPad1d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.nn.ConstantPad2d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad2d.html) | [mindspore.mint.nn.ConstantPad2d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.ConstantPad2d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
 | [torch.nn.ConstantPad3d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad3d.html) | [mindspore.mint.nn.ConstantPad3d](https://www.mindspore.cn/docs/en/master/api_python/mint/mindspore.mint.nn.ConstantPad3d.html) | [Consistent](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_api_mapping.html#api-mapping-consistency-criteria-and-exceptions) |
diff --git a/docs/mindspore/source_zh_cn/faq/operators_compile.md b/docs/mindspore/source_zh_cn/faq/operators_compile.md
index a54fa17a6b73204bb7ec1438988a61cd5dc1b580..88ef186ab033ab4e1fafb769538a835e2d47e98c 100644
--- a/docs/mindspore/source_zh_cn/faq/operators_compile.md
+++ b/docs/mindspore/source_zh_cn/faq/operators_compile.md
@@ -7,7 +7,7 @@
 A: 这种报错,主要为[ops.concat](https://www.mindspore.cn/docs/zh-CN/master/api_python/ops/mindspore.ops.concat.html)算子提示`shape`过大。建议对`dataset`对象创建迭代器时可设置输出为`numpy`, 如下设置:

 ```python
-gallaryloader.create_dict_iterator(output_numpy=True)
+galleryloader.create_dict_iterator(output_numpy=True)
 ```

 另外在上述后处理环节(非网络计算过程中,即非`construct`函数里面),可以采用`numpy`直接计算,如采用`numpy.concatenate`代替上述`ops.concat`进行计算。
@@ -22,7 +22,7 @@ A: 建议使用[ops.clip_by_value](https://www.mindspore.cn/docs/zh-CN/master/ap
 ## Q: `TransData`算子的功能是什么,能否优化性能?

-A: `TransData`算子出现的场景是: 如果网络中相互连接的算子使用的数据格式不一致(如NC1HWC0),框架就会自动插入`transdata`算子使其转换成一致的数据格式,然后再进行计算。华为Ascend支持5D格式运算,通过`transdata`算子将数据由4D转为5D以提升性能。
+A: `TransData`算子出现的场景是: 如果网络中相互连接的算子使用的数据格式不一致(如NC1HWC0),框架就会自动插入`transdata`算子使其转换成一致的数据格式,然后再进行计算。华为Ascend支持5D格式运算,通过`TransData`算子将数据由4D转为5D以提升性能。
@@ -50,7 +50,7 @@ A: 可以使用mindspore.Tensor.var接口计算Tensor的方差,你可以参考

-## Q: `nn.Embedding`层与PyTorch相比缺少了`Padding`操作,有其余的算子可以实现吗?
+## Q: `nn.Embedding`层与PyTorch相比缺少了`Padding`操作,有其他算子可以实现吗?

 A: 在PyTorch中`padding_idx`的作用是将embedding矩阵中`padding_idx`位置的词向量置为0,并且反向传播时不会更新`padding_idx`位置的词向量。在MindSpore中,可以手动将embedding的`padding_idx`位置对应的权重初始化为0,并且在训练时通过`mask`的操作,过滤掉`padding_idx`位置对应的`Loss`。
@@ -59,11 +59,11 @@
 ## Q: Operations中`Tile`算子执行到`__infer__`时`value`值为`None`,丢失了数值是怎么回事?

 A: `Tile`算子的`multiples input`必须是一个常量(该值不能直接或间接来自于图的输入)。否则构图的时候会拿到一个`None`的数据,因为图的输入是在图执行的时候才传下去的,构图的时候拿不到图的输入数据。
-相关的资料可以看[静态图语法支持](https://www.mindspore.cn/tutorials/zh-CN/master/compile/static_graph.html)。
+相关的资料可参考[静态图语法支持](https://www.mindspore.cn/tutorials/zh-CN/master/compile/static_graph.html)。
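For reference, a minimal sketch of the `padding_idx` workaround described in the `nn.Embedding` answer above; the vocabulary size, embedding dimension, padding index, and the per-token loss are made-up placeholders. The padding row of the embedding table is zero-initialized, and the loss at padding positions is filtered out with a mask:

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

vocab_size, embed_dim, padding_idx = 100, 8, 0  # illustrative sizes only

# Zero the row of the embedding table that corresponds to padding_idx.
init_table = np.random.randn(vocab_size, embed_dim).astype(np.float32)
init_table[padding_idx] = 0
embedding = nn.Embedding(vocab_size, embed_dim, embedding_table=ms.Tensor(init_table))

# During training, mask out the loss contributed by padding positions.
ids = ms.Tensor([[3, 5, 0, 0]], ms.int32)          # 0 acts as the padding token here
per_token_loss = ops.ones(ids.shape, ms.float32)    # placeholder for a real per-token loss
mask = (ids != padding_idx).astype(ms.float32)
loss = (per_token_loss * mask).sum() / mask.sum()
print(loss)
```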
-## Q: 使用conv2d算子将卷积核设置为(3,10),Tensor设置为[2,2,10,10],在ModelArts上利用Ascend跑,报错: `FM_W+pad_left+pad_right-KW>=strideW`,而CPU下不报错,怎么回事?
+## Q: 使用conv2d算子将卷积核设置为(3,10),Tensor设置为[2,2,10,10],在ModelArts上利用Ascend跑,报错:`FM_W+pad_left+pad_right-KW>=strideW`,而CPU下不报错,怎么回事?

 A: TBE(Tensor Boost Engine)算子是华为自研的Ascend算子开发工具,在TVM框架基础上扩展,进行自定义算子开发。上述问题是这个TBE算子的限制,x的width必须大于kernel的width。CPU的这个算子没有这个限制,所以不报错。
@@ -71,13 +71,13 @@
 ## Q: 请问MindSpore实现了反池化操作了吗?类似于`nn.MaxUnpool2d` 这个反池化操作?

-A: 目前 MindSpore 还没有反池化相关的接口。用户可以通过自定义算子的方式自行开发算子,详情请见[自定义算子](https://www.mindspore.cn/tutorials/zh-CN/master/custom_program/op_custom.html)。
+A: 目前 MindSpore 暂无反池化相关的接口。用户可以通过自定义算子的方式自行开发算子,详情请见[自定义算子](https://www.mindspore.cn/tutorials/zh-CN/master/custom_program/op_custom.html)。
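As a rough arithmetic check of the TBE constraint quoted in the conv2d question above, assuming `pad_mode='valid'` (no padding) and a width stride of 1: the input width is 10 and the kernel width is 10, so `FM_W + pad_left + pad_right - KW = 10 + 0 + 0 - 10 = 0`, which is not `>= strideW`, and the Ascend TBE kernel therefore rejects a shape that the CPU kernel accepts.

```python
# Illustrative check of the TBE constraint FM_W + pad_left + pad_right - KW >= strideW,
# assuming pad_mode='valid' (pad_left = pad_right = 0) and stride_w = 1.
fm_w, kw = 10, 10          # input width from [2, 2, 10, 10], kernel width from (3, 10)
pad_left = pad_right = 0
stride_w = 1
print(fm_w + pad_left + pad_right - kw >= stride_w)  # False -> the Ascend kernel reports the error
```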
 ## Q: Ascend环境上,一些尽管经过调优工具调试过的算子,性能依旧很差,这时候该怎么办?

-A: 遇到这种情况,
+A: 解决方案如下:

 1. 看一下这些算子是否为融合算子。因为算子预编译可能会改变算子的fusion_type属性,而该属性会影响算子的融合,导致原本不应该融合的小算子融合成了大算子,这些融合出来的大算子性能不一定比小算子性能好。
diff --git a/docs/mindspore/source_zh_cn/faq/performance_tuning.md b/docs/mindspore/source_zh_cn/faq/performance_tuning.md
index c4a4a24a5430174918cfdce0f71ff204c6ce4652..c24e22515669be2f86aae2bcc8a88823e7ed7e7e 100644
--- a/docs/mindspore/source_zh_cn/faq/performance_tuning.md
+++ b/docs/mindspore/source_zh_cn/faq/performance_tuning.md
@@ -2,7 +2,7 @@
 [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindspore/source_zh_cn/faq/performance_tuning.md)

-## Q: MindSpore安装完成,执行训练时发现网络性能异常,权重初始化耗时过长,怎么办?
+## Q: MindSpore安装完成,执行训练时发现网络性能异常,权重初始化耗时过长,怎么办?

 A:可能与环境中使用了`scipy 1.4`系列版本有关,通过`pip list | grep scipy`命令可查看scipy版本,建议改成MindSpore要求的`scipy`版本。版本第三方库依赖可以在`requirement.txt`中查看。
@@ -11,5 +11,4 @@ A:可能与环境中使用了`scipy 1.4`系列版本有关,通过`pip list |
 ## Q: 在昇腾芯片上进行模型训练时,如何选择batchsize达到最佳性能效果?

-A:在昇腾芯片上进行模型训练时,在batchsize等于AI CORE个数或倍数的情况下可以获取更好的训练性能。AI CORE个数可通过链接中的命令行进行查询。
-
+A:在昇腾芯片上进行模型训练时,在batch_size等于AI CORE个数或倍数的情况下可以获取更好的训练性能。AI CORE个数可通过[链接](https://support.huawei.com/enterprise/zh/doc/EDOC1100206828/eedfacda)中的命令行进行查询。
diff --git a/docs/mindspore/source_zh_cn/faq/precision_tuning.md b/docs/mindspore/source_zh_cn/faq/precision_tuning.md
index fd8ae23c4f2ed6a9bfb233956a73269f9c9596c7..305db727ac0a134960352ba85f8a97eeaea1933f 100644
--- a/docs/mindspore/source_zh_cn/faq/precision_tuning.md
+++ b/docs/mindspore/source_zh_cn/faq/precision_tuning.md
@@ -4,7 +4,7 @@
 ## Q: 导致Loss值不收敛或者精度不达标的原因有哪些呢,应该怎样定位调优?

-A: 可能导致Loss值不收敛或者精度问题的原因很多,推荐参考下面总结,逐一排查问题。
+A: 可能导致Loss值不收敛或精度问题的原因很多,推荐参考下面总结,逐一排查问题。

 [MindSpore模型精度调优实战(一)精度问题的常见现象、原因和简要调优思路](https://www.hiascend.com/developer/blog/details/0215121673876901029)
diff --git a/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md b/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
index 9bb0aa45fc89e7f3900ad9be7a1389d56999acdd..6cd8a20e5fc625efb60e009e9abd26b3c6ba8262 100644
--- a/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
@@ -261,7 +261,7 @@ mindspore.mint.argmax只有一种API形式,即mindspore.mint.argmax(input, dim
 | [torch.distributed.batch_isend_irecv](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.batch_isend_irecv) | [mindspore.mint.distributed.batch_isend_irecv](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.distributed.batch_isend_irecv.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.distributed.broadcast](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.broadcast) | [mindspore.mint.distributed.broadcast](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.distributed.broadcast.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
 | [torch.distributed.broadcast_object_list](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.broadcast_object_list) | [mindspore.mint.distributed.broadcast_object_list](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.distributed.broadcast_object_list.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| []() | [mindspore.mint.distributed.destroy_process_group](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.distributed.destroy_process_group.html) | MindSpore独有|
+| - | [mindspore.mint.distributed.destroy_process_group](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.distributed.destroy_process_group.html) | MindSpore独有|
 | [torch.distributed.gather](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.gather) | [mindspore.mint.distributed.gather](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.distributed.gather.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
 | [torch.distributed.gather_object](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.gather_object) | [mindspore.mint.distributed.gather_object](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.distributed.gather_object.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.distributed.get_backend](https://pytorch.org/docs/2.1/distributed.html#torch.distributed.get_backend) | [mindspore.mint.distributed.get_backend](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.distributed.get_backend.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
@@ -286,15 +286,15 @@ mindspore.mint.argmax只有一种API形式,即mindspore.mint.argmax(input, dim
 | PyTorch 2.1 APIs | MindSpore APIs | 说明 |
 | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| [torch.nn.AdaptiveAvgPool1d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool1d.html) | [mindspore.mint.nn.AdaptiveAvgPool1d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool1d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| [torch.nn.AdaptiveAvgPool2d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool2d.html) | [mindspore.mint.nn.AdaptiveAvgPool2d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| [torch.nn.AdaptiveAvgPool3d](https://PyTorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool3d.html) | [mindspore.mint.nn.AdaptiveAvgPool3d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool3d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
-| [torch.nn.AvgPool2d](https://PyTorch.org/docs/2.1/generated/torch.nn.AvgPool2d.html) | [mindspore.mint.nn.AvgPool2d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.AvgPool2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
-| [torch.nn.BCELoss](https://PyTorch.org/docs/2.1/generated/torch.nn.BCELoss.html) | [mindspore.mint.nn.BCELoss](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BCELoss.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
+| [torch.nn.AdaptiveAvgPool1d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool1d.html) | [mindspore.mint.nn.AdaptiveAvgPool1d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool1d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.AdaptiveAvgPool2d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool2d.html) | [mindspore.mint.nn.AdaptiveAvgPool2d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.AdaptiveAvgPool3d](https://pytorch.org/docs/2.1/generated/torch.nn.AdaptiveAvgPool3d.html) | [mindspore.mint.nn.AdaptiveAvgPool3d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.AdaptiveAvgPool3d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景)|
+| [torch.nn.AvgPool2d](https://pytorch.org/docs/2.1/generated/torch.nn.AvgPool2d.html) | [mindspore.mint.nn.AvgPool2d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.AvgPool2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
+| [torch.nn.BCELoss](https://pytorch.org/docs/2.1/generated/torch.nn.BCELoss.html) | [mindspore.mint.nn.BCELoss](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BCELoss.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/2.1/generated/torch.nn.BCEWithLogitsLoss.html) | [mindspore.mint.nn.BCEWithLogitsLoss](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BCEWithLogitsLoss.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
-| [torch.nn.BatchNorm1d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm1d.html) | [mindspore.mint.nn.BatchNorm1d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BatchNorm1d.html) | 功能一致,MindSpore默认为推理模式 |
-| [torch.nn.BatchNorm2d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm2d.html) | [mindspore.mint.nn.BatchNorm2d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BatchNorm2d.html) | 功能一致,MindSpore默认为推理模式 |
-| [torch.nn.BatchNorm3d](https://PyTorch.org/docs/2.1/generated/torch.nn.BatchNorm3d.html) | [mindspore.mint.nn.BatchNorm3d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BatchNorm3d.html) | 功能一致,MindSpore默认为推理模式 |
+| [torch.nn.BatchNorm1d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm1d.html) | [mindspore.mint.nn.BatchNorm1d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BatchNorm1d.html) | 功能一致,MindSpore默认为推理模式 |
+| [torch.nn.BatchNorm2d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm2d.html) | [mindspore.mint.nn.BatchNorm2d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BatchNorm2d.html) | 功能一致,MindSpore默认为推理模式 |
+| [torch.nn.BatchNorm3d](https://pytorch.org/docs/2.1/generated/torch.nn.BatchNorm3d.html) | [mindspore.mint.nn.BatchNorm3d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.BatchNorm3d.html) | 功能一致,MindSpore默认为推理模式 |
 | [torch.nn.ConstantPad1d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad1d.html) | [mindspore.mint.nn.ConstantPad1d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.ConstantPad1d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.nn.ConstantPad2d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad2d.html) | [mindspore.mint.nn.ConstantPad2d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.ConstantPad2d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |
 | [torch.nn.ConstantPad3d](https://pytorch.org/docs/2.1/generated/torch.nn.ConstantPad3d.html) | [mindspore.mint.nn.ConstantPad3d](https://www.mindspore.cn/docs/zh-CN/master/api_python/mint/mindspore.mint.nn.ConstantPad3d.html) | [一致](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html#api映射一致标准及例外场景) |