From 9c671130309606b698bb4221301de6e2efa3f456 Mon Sep 17 00:00:00 2001 From: huan <3174348550@qq.com> Date: Tue, 19 Aug 2025 17:12:03 +0800 Subject: [PATCH] add api update files --- resource/api_updates/func_api_updates_cn.md | 20 +++++++++++- resource/api_updates/func_api_updates_en.md | 20 +++++++++++- resource/api_updates/mint_api_updates_cn.md | 36 ++++++++++++++++++++- resource/api_updates/mint_api_updates_en.md | 36 ++++++++++++++++++++- resource/api_updates/nn_api_updates_cn.md | 5 +-- resource/api_updates/nn_api_updates_en.md | 5 +-- resource/api_updates/ops_api_updates_cn.md | 15 ++++++++- resource/api_updates/ops_api_updates_en.md | 15 ++++++++- 8 files changed, 138 insertions(+), 14 deletions(-) diff --git a/resource/api_updates/func_api_updates_cn.md b/resource/api_updates/func_api_updates_cn.md index d1a23e7edf..96679ba1fe 100644 --- a/resource/api_updates/func_api_updates_cn.md +++ b/resource/api_updates/func_api_updates_cn.md @@ -1,6 +1,24 @@ # mindspore.ops API接口变更 -与上一版本相比,MindSpore中`mindspore.ops`API接口的添加、删除和支持平台的更改信息如下表所示。 +2.7.0版本与2.6.0版本相比,MindSpore中`mindspore.ops`API接口的添加、删除和支持平台的更改信息如下表所示。 |API|变更状态|概述|支持平台|类别| |:----|:----|:----|:----|:----| +[mindspore.ops.ring_attention_update](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/ops/mindspore.ops.ring_attention_update.html#mindspore.ops.ring_attention_update)|New|RingAttentionUpdate算子功能是将两次FlashAttention的输出根据其不同的softmax的max和sum更新。|r2.7.0: Ascend|神经网络 + +2.6.0版本与2.5.0版本相比,MindSpore中`mindspore.ops`API接口的添加、删除和支持平台的更改信息如下表所示。 + +|API|变更状态|概述|支持平台|类别 +|:----|:----|:----|:----|:---- +[mindspore.ops.reverse](https://mindspore.cn/docs/zh-CN/r2.5.0/api_python/ops/mindspore.ops.reverse.html#mindspore.ops.reverse)|Deleted|此接口将在未来版本弃用,请使用 mindspore.ops.flip() 代替。||Array操作 +[mindspore.ops.roll](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.roll.html#mindspore.ops.roll)|Changed|r2.5.0: 沿轴移动Tensor的元素。 => r2.6.0: 按维度移动tensor的元素。|r2.5.0: GPU => r2.6.0: Ascend/GPU|Array操作 
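The `mindspore.ops.ring_attention_update` entry above describes combining two FlashAttention outputs via their softmax max and softmax sum statistics. The sketch below is a minimal pure-Python illustration of the standard rescale-and-merge rule for a single row with scalar statistics; the function name and flat-list layout are illustrative only, not the operator's real signature:

```python
import math

def merge_attention(o1, m1, s1, o2, m2, s2):
    """Merge two partial attention results (output o, softmax max m, softmax
    sum s) into the result a single softmax over both blocks would produce."""
    m = max(m1, m2)            # joint softmax max
    a1 = math.exp(m1 - m)      # rescaling factor for block 1
    a2 = math.exp(m2 - m)      # rescaling factor for block 2
    s = s1 * a1 + s2 * a2      # joint softmax sum
    o = [(x1 * s1 * a1 + x2 * s2 * a2) / s for x1, x2 in zip(o1, o2)]
    return o, m, s
```

Because each block keeps its own running max, the merge stays numerically stable even when the raw exponentials would overflow.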
+[mindspore.ops.unique_with_pad](https://mindspore.cn/docs/zh-CN/r2.5.0/api_python/ops/mindspore.ops.unique_with_pad.html#mindspore.ops.unique_with_pad)|Deleted|对输入一维Tensor中元素去重,返回一维Tensor中的唯一元素(使用pad_num填充)和相对索引。||Array操作 +[mindspore.ops.move_to](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.move_to.html#mindspore.ops.move_to)|New|拷贝tensor到目标设备,包含同步和异步两种方式,默认是同步方式。|r2.6.0: Ascend/CPU|Tensor创建 +[mindspore.ops.fused_infer_attention_score](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.fused_infer_attention_score.html#mindspore.ops.fused_infer_attention_score)|New|这是一个适配增量和全量推理场景的FlashAttention函数,既可以支持全量计算场景(PromptFlashAttention),也可支持增量计算场景(IncreFlashAttention)。|r2.6.0: Ascend|神经网络 +[mindspore.ops.moe_token_permute](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.moe_token_permute.html#mindspore.ops.moe_token_permute)|New|根据 indices 对 tokens 进行排列。|r2.6.0: Ascend|神经网络 +[mindspore.ops.moe_token_unpermute](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.moe_token_unpermute.html#mindspore.ops.moe_token_unpermute)|New|根据排序的索引对已排列的标记进行反排列,并可选择将标记与其对应的概率合并。|r2.6.0: Ascend|神经网络 +[mindspore.ops.speed_fusion_attention](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.speed_fusion_attention.html#mindspore.ops.speed_fusion_attention)|New|本接口用于实现self-attention的融合计算。|r2.6.0: Ascend|神经网络 +[mindspore.ops.scalar_cast](https://mindspore.cn/docs/zh-CN/r2.5.0/api_python/ops/mindspore.ops.scalar_cast.html#mindspore.ops.scalar_cast)|Deleted|该接口从2.3版本开始已被弃用,并将在未来版本中被移除,建议使用 int(x) 或 float(x) 代替。||类型转换 +[mindspore.ops.svd](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.svd.html#mindspore.ops.svd)|Changed|计算单个或多个矩阵的奇异值分解。|r2.5.0: GPU/CPU => r2.6.0: Ascend/GPU/CPU|线性代数函数 +[mindspore.ops.bessel_i0e](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.bessel_i0e.html#mindspore.ops.bessel_i0e)|Changed|r2.5.0: 逐元素计算指数缩放第一类零阶修正贝塞尔函数。 => r2.6.0: 
逐元素计算输入tensor的指数缩放第一类零阶修正贝塞尔函数值。|r2.5.0: Ascend/GPU/CPU => r2.6.0: GPU/CPU|逐元素运算 +[mindspore.ops.bessel_i1e](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.bessel_i1e.html#mindspore.ops.bessel_i1e)|Changed|r2.5.0: 逐元素计算指数缩放第一类一阶修正Bessel函数。 => r2.6.0: 逐元素计算输入tensor的指数缩放第一类一阶修正贝塞尔函数值。|r2.5.0: Ascend/GPU/CPU => r2.6.0: GPU/CPU|逐元素运算 diff --git a/resource/api_updates/func_api_updates_en.md b/resource/api_updates/func_api_updates_en.md index 80481cda88..319d231af6 100644 --- a/resource/api_updates/func_api_updates_en.md +++ b/resource/api_updates/func_api_updates_en.md @@ -1,6 +1,24 @@ # mindspore.ops API Interface Change -Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.ops` operators in MindSpore, is shown in the following table. +Compared with version 2.6.0, the added, deleted, and supported-platform change information of `mindspore.ops` operators in version 2.7.0 is shown in the following table. |API|Status|Description|Support Platform|Class |:----|:----|:----|:----|:---- +[mindspore.ops.ring_attention_update](https://mindspore.cn/docs/en/r2.7.0/api_python/ops/mindspore.ops.ring_attention_update.html#mindspore.ops.ring_attention_update)|New|The RingAttentionUpdate operator updates the output of two FlashAttention operations based on their respective softmax max and softmax sum values.|r2.7.0: Ascend|Neural Network + +Compared with version 2.5.0, the added, deleted, and supported-platform change information of `mindspore.ops` operators in version 2.6.0 is shown in the following table.
+ +|API|Status|Description|Support Platform|Class +|:----|:----|:----|:----|:---- +[mindspore.ops.reverse](https://mindspore.cn/docs/en/r2.5.0/api_python/ops/mindspore.ops.reverse.html#mindspore.ops.reverse)|Deleted|This interface will be deprecated in a future version; use mindspore.ops.flip() instead.||Array Operation +[mindspore.ops.roll](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.roll.html#mindspore.ops.roll)|Changed|r2.5.0: Rolls the elements of a tensor along an axis. => r2.6.0: Roll the elements of a tensor along a dimension.|r2.5.0: GPU => r2.6.0: Ascend/GPU|Array Operation +[mindspore.ops.unique_with_pad](https://mindspore.cn/docs/en/r2.5.0/api_python/ops/mindspore.ops.unique_with_pad.html#mindspore.ops.unique_with_pad)|Deleted|Returns unique elements and relative indexes in 1-D tensor, filled with padding num.||Array Operation +[mindspore.ops.bessel_i0e](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.bessel_i0e.html#mindspore.ops.bessel_i0e)|Changed|r2.5.0: Computes exponential scaled modified Bessel function of the first kind, order 0 element-wise. => r2.6.0: Computes the exponentially scaled zeroth order modified Bessel function of the first kind for each element input.|r2.5.0: Ascend/GPU/CPU => r2.6.0: GPU/CPU|Element-wise Operations +[mindspore.ops.bessel_i1e](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.bessel_i1e.html#mindspore.ops.bessel_i1e)|Changed|r2.5.0: Computes exponential scaled modified Bessel function of the first kind, order 1 element-wise.
=> r2.6.0: Computes the exponentially scaled first order modified Bessel function of the first kind for each element input.|r2.5.0: Ascend/GPU/CPU => r2.6.0: GPU/CPU|Element-wise Operations +[mindspore.ops.svd](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.svd.html#mindspore.ops.svd)|Changed|Computes the singular value decompositions of one or more matrices.|r2.5.0: GPU/CPU => r2.6.0: Ascend/GPU/CPU|Linear Algebraic Functions +[mindspore.ops.fused_infer_attention_score](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.fused_infer_attention_score.html#mindspore.ops.fused_infer_attention_score)|New|This is a FlashAttention function designed for both incremental and full inference scenarios.|r2.6.0: Ascend|Neural Network +[mindspore.ops.moe_token_permute](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.moe_token_permute.html#mindspore.ops.moe_token_permute)|New|Permute the tokens based on the indices.|r2.6.0: Ascend|Neural Network +[mindspore.ops.moe_token_unpermute](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.moe_token_unpermute.html#mindspore.ops.moe_token_unpermute)|New|Unpermute a tensor of permuted tokens based on sorted indices, and optionally merge the tokens with their corresponding probabilities.|r2.6.0: Ascend|Neural Network +[mindspore.ops.speed_fusion_attention](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.speed_fusion_attention.html#mindspore.ops.speed_fusion_attention)|New|The interface is used for self-attention fusion computing.|r2.6.0: Ascend|Neural Network +[mindspore.ops.move_to](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.move_to.html#mindspore.ops.move_to)|New|Copy tensor to target device synchronously or asynchronously, default synchronously.|r2.6.0: Ascend/CPU|Tensor Creation +[mindspore.ops.scalar_cast](https://mindspore.cn/docs/en/r2.5.0/api_python/ops/mindspore.ops.scalar_cast.html#mindspore.ops.scalar_cast)|Deleted|The interface 
is deprecated from version 2.3 and will be removed in a future version, please use int(x) or float(x) instead.||Type Cast \ No newline at end of file diff --git a/resource/api_updates/mint_api_updates_cn.md b/resource/api_updates/mint_api_updates_cn.md index 204722ae98..3a5c43eac1 100644 --- a/resource/api_updates/mint_api_updates_cn.md +++ b/resource/api_updates/mint_api_updates_cn.md @@ -1,6 +1,40 @@ # mindspore.mint API接口变更 -与上一版本相比,MindSpore中`mindspore.mint`API接口的添加、删除和支持平台的更改信息如下表所示。 +2.7.0版本与2.6.0版本相比,MindSpore中`mindspore.mint`API接口的添加、删除和支持平台的更改信息如下表所示。 |API|变更状态|概述|支持平台|类别| |:----|:----|:----|:----|:----| +[mindspore.mint.distributed.TCPStore](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.TCPStore.html#mindspore.mint.distributed.TCPStore)|New|一种基于传输控制协议(TCP)的分布式键值存储实现方法。|r2.7.0: Ascend|mindspore.mint.distributed +[mindspore.mint.distributed.all_gather_into_tensor_uneven](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.all_gather_into_tensor_uneven.html#mindspore.mint.distributed.all_gather_into_tensor_uneven)|New|收集并拼接各设备上的张量,各设备上的张量第一维可以不一致。|r2.7.0: Ascend|mindspore.mint.distributed +[mindspore.mint.distributed.reduce_scatter_tensor_uneven](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.distributed.reduce_scatter_tensor_uneven.html#mindspore.mint.distributed.reduce_scatter_tensor_uneven)|New|在指定通信组中执行归约分发操作,根据 input_split_sizes 将归约后的张量分散到各rank的输出张量中。|r2.7.0: Ascend|mindspore.mint.distributed +[mindspore.mint.floor_divide](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.floor_divide.html#mindspore.mint.floor_divide)|New|按元素将第一个输入Tensor除以第二个输入Tensor,并向下取整。|r2.7.0: Ascend|逐元素运算 +[mindspore.mint.nn.functional.threshold](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.functional.threshold.html#mindspore.mint.nn.functional.threshold)|New|逐元素计算Threshold激活函数。|r2.7.0: Ascend|非线性激活函数 
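The `threshold` entries above (module, functional, and in-place variants) all implement the same element-wise rule: elements strictly greater than the threshold pass through, everything else is replaced. A minimal pure-Python sketch of the documented semantics (illustrative names; the real APIs operate on tensors, and `threshold_` updates its input in place):

```python
def threshold(inputs, thr, value):
    # Elements strictly greater than `thr` pass through; the rest become `value`.
    return [x if x > thr else value for x in inputs]
```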
+[mindspore.mint.nn.functional.threshold_](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.functional.threshold_.html#mindspore.mint.nn.functional.threshold_)|New|通过逐元素计算 Threshold 激活函数,原地更新 input Tensor。|r2.7.0: Ascend|非线性激活函数 +[mindspore.mint.nn.Threshold](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/mint/mindspore.mint.nn.Threshold.html#mindspore.mint.nn.Threshold)|New|逐元素计算Threshold激活函数。|r2.7.0: Ascend|非线性激活层 (加权和,非线性) + +2.6.0版本与2.5.0版本相比,MindSpore中`mindspore.mint`API接口的添加、删除和支持平台的更改信息如下表所示。 + +|API|变更状态|概述|支持平台|类别 +|:----|:----|:----|:----|:---- +[mindspore.mint.nn.functional.pixel_shuffle](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.functional.pixel_shuffle.html#mindspore.mint.nn.functional.pixel_shuffle)|New|根据上采样系数重排Tensor中的元素。|r2.6.0: Ascend|Vision函数 +[mindspore.mint.nn.PixelShuffle](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.PixelShuffle.html#mindspore.mint.nn.PixelShuffle)|New|根据上采样系数重新排列Tensor中的元素。|r2.6.0: Ascend|Vision层 +[mindspore.mint.distributed.is_available](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.distributed.is_available.html#mindspore.mint.distributed.is_available)|New|分布式模块是否可用。|r2.6.0: Ascend|mindspore.mint.distributed +[mindspore.mint.distributed.is_initialized](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.distributed.is_initialized.html#mindspore.mint.distributed.is_initialized)|New|默认的通信组是否初始化。|r2.6.0: Ascend|mindspore.mint.distributed +[mindspore.mint.optim.SGD](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.optim.SGD.html#mindspore.mint.optim.SGD)|New|随机梯度下降算法。|r2.6.0: Ascend|mindspore.mint.optim +[mindspore.mint.diag](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.diag.html#mindspore.mint.diag)|New|如果 input 是向量(1-D 张量),则返回一个二维张量,其中 input 的元素作为对角线。|r2.6.0: Ascend|其他运算 
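The `mindspore.mint.diag` entry above returns, for a 1-D input, a square matrix with the input's elements on the diagonal. A pure-Python sketch of that 1-D case (nested lists stand in for tensors; the tensor API also accepts a diagonal offset not shown here):

```python
def diag(vec):
    # Place the elements of a 1-D input on the diagonal of an n x n matrix.
    n = len(vec)
    return [[vec[i] if i == j else 0 for j in range(n)] for i in range(n)]
```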
+[mindspore.mint.triangular_solve](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.triangular_solve.html#mindspore.mint.triangular_solve)|New|求解正上三角形或下三角形可逆矩阵 A 和包含多个元素的右侧边 b 的方程组的解。|r2.6.0: Ascend|其他运算 +[mindspore.mint.nn.KLDivLoss](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.KLDivLoss.html#mindspore.mint.nn.KLDivLoss)|New|计算输入 input 和 target 的Kullback-Leibler散度。|r2.6.0: Ascend|损失函数 +[mindspore.mint.nn.functional.cross_entropy](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.functional.cross_entropy.html#mindspore.mint.nn.functional.cross_entropy)|New|获取预测值和目标值之间的交叉熵损失。|r2.6.0: Ascend|损失函数 +[mindspore.mint.nn.functional.kl_div](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.functional.kl_div.html#mindspore.mint.nn.functional.kl_div)|New|计算输入 input 和 target 的Kullback-Leibler散度。|r2.6.0: Ascend|损失函数 +[mindspore.mint.nn.functional.adaptive_avg_pool3d](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.functional.adaptive_avg_pool3d.html#mindspore.mint.nn.functional.adaptive_avg_pool3d)|New|对一个多平面输入信号执行三维自适应平均池化。|r2.6.0: Ascend|池化函数 +[mindspore.mint.nn.functional.adaptive_max_pool1d](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.functional.adaptive_max_pool1d.html#mindspore.mint.nn.functional.adaptive_max_pool1d)|New|对一个多平面输入信号执行一维自适应最大池化。|r2.6.0: Ascend|池化函数 +[mindspore.mint.nn.functional.avg_pool3d](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.functional.avg_pool3d.html#mindspore.mint.nn.functional.avg_pool3d)|New|在输入Tensor上应用3d平均池化,输入Tensor可以看作是由一系列3d平面组成的。|r2.6.0: Ascend|池化函数 +[mindspore.mint.nn.AdaptiveMaxPool1d](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.AdaptiveMaxPool1d.html#mindspore.mint.nn.AdaptiveMaxPool1d)|New|对由多个输入平面组成的输入信号应用1D自适应最大池化。|r2.6.0: Ascend|池化层 
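`mindspore.mint.nn.AdaptiveMaxPool1d` above asks only for the output length and derives the pooling windows itself. A pure-Python sketch of the usual adaptive-pooling window rule (an assumption based on the common convention, not MindSpore's exact kernel):

```python
import math

def adaptive_max_pool1d(seq, out_len):
    # Window i covers [floor(i*L/out_len), ceil((i+1)*L/out_len)); together
    # the windows tile the whole input regardless of divisibility.
    L = len(seq)
    return [max(seq[(i * L) // out_len:math.ceil((i + 1) * L / out_len)])
            for i in range(out_len)]
```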
+[mindspore.mint.nn.AvgPool3d](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.AvgPool3d.html#mindspore.mint.nn.AvgPool3d)|New|对输入张量应用三维平均池化,可视为三维输入平面的组合。|r2.6.0: Ascend|池化层 +[mindspore.mint.index_add](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.index_add.html#mindspore.mint.index_add)|New|根据 index 中的索引顺序,将 alpha 乘以 source 的元素累加到 input 中。|r2.6.0: Ascend|索引、切分、连接、突变运算 +[mindspore.mint.linalg.qr](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.linalg.qr.html#mindspore.mint.linalg.qr)|New|对输入矩阵进行正交分解: $(A = QR)$。|r2.6.0: Ascend|逆数 +[mindspore.mint.logaddexp2](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.logaddexp2.html#mindspore.mint.logaddexp2)|New|计算以2为底的输入的指数和的对数。|r2.6.0: Ascend|逐元素运算 +[mindspore.mint.nn.functional.elu_](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.functional.elu_.html#mindspore.mint.nn.functional.elu_)|New|指数线性单元激活函数。|r2.6.0: Ascend|非线性激活函数 +[mindspore.mint.nn.functional.glu](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.functional.glu.html#mindspore.mint.nn.functional.glu)|New|计算输入Tensor的门线性单元激活函数(Gated Linear Unit activation function)值。|r2.6.0: Ascend|非线性激活函数 +[mindspore.mint.nn.GLU](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.GLU.html#mindspore.mint.nn.GLU)|New|计算输入Tensor的门线性单元激活函数(Gated Linear Unit activation function)值。|r2.6.0: Ascend|非线性激活层 (加权和,非线性) +[mindspore.mint.nn.Sigmoid](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/mint/mindspore.mint.nn.Sigmoid.html#mindspore.mint.nn.Sigmoid)|New|逐元素计算Sigmoid激活函数。|r2.6.0: Ascend|非线性激活层 (加权和,非线性) diff --git a/resource/api_updates/mint_api_updates_en.md b/resource/api_updates/mint_api_updates_en.md index c26d6af8d6..fcf771525c 100644 --- a/resource/api_updates/mint_api_updates_en.md +++ b/resource/api_updates/mint_api_updates_en.md @@ -1,6 +1,40 @@ # mindspore.mint API Interface Change -Compared with the previous 
version, the added, deleted and supported platforms change information of `mindspore.mint` operators in MindSpore, is shown in the following table. +Compared with version 2.6.0, the added, deleted, and supported-platform change information of `mindspore.mint` operators in version 2.7.0 is shown in the following table. |API|Status|Description|Support Platform|Class |:----|:----|:----|:----|:---- +[mindspore.mint.nn.Threshold](https://mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.Threshold.html#mindspore.mint.nn.Threshold)|New|Compute the Threshold activation function element-wise.|r2.7.0: Ascend|Non-linear Activations (weighted sum, nonlinearity) +[mindspore.mint.nn.functional.threshold](https://mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.functional.threshold.html#mindspore.mint.nn.functional.threshold)|New|Compute the Threshold activation function element-wise.|r2.7.0: Ascend|Non-linear activation functions +[mindspore.mint.nn.functional.threshold_](https://mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.nn.functional.threshold_.html#mindspore.mint.nn.functional.threshold_)|New|Update the input tensor in-place by computing the Threshold activation function element-wise.|r2.7.0: Ascend|Non-linear activation functions +[mindspore.mint.floor_divide](https://mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.floor_divide.html#mindspore.mint.floor_divide)|New|Divides the first input tensor by the second input tensor element-wise and rounds down to the closest integer.|r2.7.0: Ascend|Pointwise Operations +[mindspore.mint.distributed.TCPStore](https://mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.TCPStore.html#mindspore.mint.distributed.TCPStore)|New|A TCP-based distributed key-value store implementation.|r2.7.0: Ascend|mindspore.mint.distributed
+[mindspore.mint.distributed.all_gather_into_tensor_uneven](https://mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.all_gather_into_tensor_uneven.html#mindspore.mint.distributed.all_gather_into_tensor_uneven)|New|Gathers and concatenates tensors across devices with uneven first dimensions.|r2.7.0: Ascend|mindspore.mint.distributed +[mindspore.mint.distributed.reduce_scatter_tensor_uneven](https://mindspore.cn/docs/en/r2.7.0/api_python/mint/mindspore.mint.distributed.reduce_scatter_tensor_uneven.html#mindspore.mint.distributed.reduce_scatter_tensor_uneven)|New|Reduce tensors from the specified communication group and scatter to the output tensor according to input_split_sizes.|r2.7.0: Ascend|mindspore.mint.distributed + +Compared with version 2.5.0, the added, deleted, and supported-platform change information of `mindspore.mint` operators in version 2.6.0 is shown in the following table. + +|API|Status|Description|Support Platform|Class +|:----|:----|:----|:----|:---- +[mindspore.mint.index_add](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.index_add.html#mindspore.mint.index_add)|New|Accumulate the elements of alpha times source into the input by adding to the index in the order given in index.|r2.6.0: Ascend|Indexing, Slicing, Joining, Mutating Operations +[mindspore.mint.linalg.qr](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.linalg.qr.html#mindspore.mint.linalg.qr)|New|Orthogonal decomposition of the input $(A = QR)$.|r2.6.0: Ascend|Inverses +[mindspore.mint.nn.KLDivLoss](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.KLDivLoss.html#mindspore.mint.nn.KLDivLoss)|New|Computes the Kullback-Leibler divergence between the input and the target.|r2.6.0: Ascend|Loss Functions
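The `mindspore.mint.nn.functional.kl_div` entry above computes the Kullback-Leibler divergence between input and target. Assuming the widespread convention (shared with the PyTorch API this namespace mirrors; verify against the linked page) that the input is given as log-probabilities, a pure-Python sketch with sum reduction:

```python
import math

def kl_div(log_input, target):
    # Pointwise t * (log(t) - log_input); terms with t == 0 contribute 0.
    return sum(t * (math.log(t) - lp) if t > 0 else 0.0
               for lp, t in zip(log_input, target))
```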
+[mindspore.mint.nn.functional.cross_entropy](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.functional.cross_entropy.html#mindspore.mint.nn.functional.cross_entropy)|New|The cross entropy loss between input and target.|r2.6.0: Ascend|Loss Functions +[mindspore.mint.nn.functional.kl_div](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.functional.kl_div.html#mindspore.mint.nn.functional.kl_div)|New|Computes the Kullback-Leibler divergence between the input and the target.|r2.6.0: Ascend|Loss Functions +[mindspore.mint.nn.GLU](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.GLU.html#mindspore.mint.nn.GLU)|New|Computes GLU (Gated Linear Unit activation function) of the input tensor.|r2.6.0: Ascend|Non-linear Activations (weighted sum, nonlinearity) +[mindspore.mint.nn.Sigmoid](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.Sigmoid.html#mindspore.mint.nn.Sigmoid)|New|Applies sigmoid activation function element-wise.|r2.6.0: Ascend|Non-linear Activations (weighted sum, nonlinearity) +[mindspore.mint.nn.functional.elu_](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.functional.elu_.html#mindspore.mint.nn.functional.elu_)|New|Exponential Linear Unit activation function|r2.6.0: Ascend|Non-linear activation functions +[mindspore.mint.nn.functional.glu](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.functional.glu.html#mindspore.mint.nn.functional.glu)|New|Computes GLU (Gated Linear Unit activation function) of the input tensor.|r2.6.0: Ascend|Non-linear activation functions +[mindspore.mint.diag](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.diag.html#mindspore.mint.diag)|New|If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal.|r2.6.0: Ascend|Other Operations 
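The GLU entries above gate one half of the input with the sigmoid of the other half: GLU(a, b) = a * sigmoid(b). A pure-Python sketch for a vector split down the middle (the tensor APIs split along a chosen dimension instead):

```python
import math

def glu(x):
    # Split the input in half, then gate: GLU(a, b) = a * sigmoid(b).
    h = len(x) // 2
    return [a / (1.0 + math.exp(-b)) for a, b in zip(x[:h], x[h:])]
```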
+[mindspore.mint.triangular_solve](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.triangular_solve.html#mindspore.mint.triangular_solve)|New|Solves a system of equations with a square upper or lower triangular invertible matrix A and multiple right-hand sides b.|r2.6.0: Ascend|Other Operations +[mindspore.mint.logaddexp2](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.logaddexp2.html#mindspore.mint.logaddexp2)|New|Logarithm of the sum of exponentiations of the inputs in base of 2.|r2.6.0: Ascend|Pointwise Operations +[mindspore.mint.nn.AdaptiveMaxPool1d](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.AdaptiveMaxPool1d.html#mindspore.mint.nn.AdaptiveMaxPool1d)|New|Applies a 1D adaptive max pooling over an input signal composed of several input planes.|r2.6.0: Ascend|Pooling Layers +[mindspore.mint.nn.AvgPool3d](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.AvgPool3d.html#mindspore.mint.nn.AvgPool3d)|New|Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes.|r2.6.0: Ascend|Pooling Layers +[mindspore.mint.nn.functional.adaptive_avg_pool3d](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.functional.adaptive_avg_pool3d.html#mindspore.mint.nn.functional.adaptive_avg_pool3d)|New|Performs 3D adaptive average pooling on a multi-plane input signal.|r2.6.0: Ascend|Pooling functions +[mindspore.mint.nn.functional.adaptive_max_pool1d](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.functional.adaptive_max_pool1d.html#mindspore.mint.nn.functional.adaptive_max_pool1d)|New|Performs 1D adaptive max pooling on a multi-plane input signal.|r2.6.0: Ascend|Pooling functions +[mindspore.mint.nn.functional.avg_pool3d](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.functional.avg_pool3d.html#mindspore.mint.nn.functional.avg_pool3d)|New|Applies a 3D average pooling over an input Tensor 
which can be regarded as a composition of 3D input planes.|r2.6.0: Ascend|Pooling functions +[mindspore.mint.nn.PixelShuffle](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.PixelShuffle.html#mindspore.mint.nn.PixelShuffle)|New|Rearrange elements in a tensor according to an upscaling factor.|r2.6.0: Ascend|Vision Layer +[mindspore.mint.nn.functional.pixel_shuffle](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.nn.functional.pixel_shuffle.html#mindspore.mint.nn.functional.pixel_shuffle)|New|Rearrange elements in a tensor according to an upscaling factor.|r2.6.0: Ascend|Vision functions +[mindspore.mint.distributed.is_available](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.distributed.is_available.html#mindspore.mint.distributed.is_available)|New|Checks if distributed module is available.|r2.6.0: Ascend|mindspore.mint.distributed +[mindspore.mint.distributed.is_initialized](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.distributed.is_initialized.html#mindspore.mint.distributed.is_initialized)|New|Checks if default process group has been initialized.|r2.6.0: Ascend|mindspore.mint.distributed +[mindspore.mint.optim.SGD](https://mindspore.cn/docs/en/r2.6.0/api_python/mint/mindspore.mint.optim.SGD.html#mindspore.mint.optim.SGD)|New|Stochastic Gradient Descent optimizer.|r2.6.0: Ascend|mindspore.mint.optim diff --git a/resource/api_updates/nn_api_updates_cn.md b/resource/api_updates/nn_api_updates_cn.md index 917a1870ad..d677dad69d 100644 --- a/resource/api_updates/nn_api_updates_cn.md +++ b/resource/api_updates/nn_api_updates_cn.md @@ -1,6 +1,3 @@ # mindspore.nn API接口变更 -与上一版本相比,MindSpore中`mindspore.nn`API接口的添加、删除和支持平台的更改信息如下表所示。 - -|API|变更状态|概述|支持平台|类别 -|:----|:----|:----|:----|:---- +与上一版本2.6.0相比,MindSpore中 `mindspore.nn` API接口没有变化。 diff --git a/resource/api_updates/nn_api_updates_en.md b/resource/api_updates/nn_api_updates_en.md index c8922d2324..b6a5969d63 100644 --- 
a/resource/api_updates/nn_api_updates_en.md +++ b/resource/api_updates/nn_api_updates_en.md @@ -1,6 +1,3 @@ # mindspore.nn API Interface Change -Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.nn` operators in MindSpore, is shown in the following table. - -|API|Status|Description|Support Platform|Class -|:----|:----|:----|:----|:---- +Compared with version 2.6.0, the `mindspore.nn` operators in MindSpore have no changes. \ No newline at end of file diff --git a/resource/api_updates/ops_api_updates_cn.md b/resource/api_updates/ops_api_updates_cn.md index 9aed66141d..0e6bb73648 --- a/resource/api_updates/ops_api_updates_cn.md +++ b/resource/api_updates/ops_api_updates_cn.md @@ -1,6 +1,19 @@ # mindspore.ops.primitive API接口变更 -与上一版本相比,MindSpore中`mindspore.ops.primitive`API接口的添加、删除和支持平台的更改信息如下表所示。 +2.7.0版本与2.6.0版本相比,MindSpore中`mindspore.ops.primitive`API接口的添加、删除和支持平台的更改信息如下表所示。 |API|变更状态|概述|支持平台|类别 |:----|:----|:----|:----|:---- +[mindspore.ops.AllGatherV](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/ops/mindspore.ops.AllGatherV.html#mindspore.ops.AllGatherV)|New|从指定的通信组中收集不均匀的张量,并返回全部收集的张量。|r2.7.0: Ascend/GPU|通信算子 +[mindspore.ops.ReduceScatterV](https://mindspore.cn/docs/zh-CN/r2.7.0/api_python/ops/mindspore.ops.ReduceScatterV.html#mindspore.ops.ReduceScatterV)|New|规约并且分发指定通信组中不均匀的张量,返回分发后的张量。|r2.7.0: Ascend/GPU|通信算子 + +2.6.0版本与2.5.0版本相比,MindSpore中`mindspore.ops.primitive`API接口的添加、删除和支持平台的更改信息如下表所示。 + +|API|变更状态|概述|支持平台|类别 +|:----|:----|:----|:----|:---- +[mindspore.ops.Morph](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.Morph.html#mindspore.ops.Morph)|New|Morph 算子用于对用户自定义函数 fn 进行封装,允许其被当做自定义算子使用。|r2.6.0: |框架算子 +[mindspore.ops.Svd](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.Svd.html#mindspore.ops.Svd)|Changed|计算一个或多个矩阵的奇异值分解。|r2.5.0: GPU/CPU => r2.6.0: Ascend/GPU/CPU|线性代数算子
+[mindspore.ops.CustomOpBuilder](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.CustomOpBuilder.html#mindspore.ops.CustomOpBuilder)|New|CustomOpBuilder 用于初始化和配置MindSpore的自定义算子。|r2.6.0: Ascend/CPU|自定义算子 +[mindspore.ops.custom_info_register](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.custom_info_register.html#mindspore.ops.custom_info_register)|Changed|装饰器,用于将注册信息绑定到: mindspore.ops.Custom 的 func 参数。|r2.5.0: => r2.6.0: Ascend/GPU/CPU|自定义算子 +[mindspore.ops.kernel](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.kernel.html#mindspore.ops.kernel)|Changed|用于MindSpore Hybrid DSL函数书写的装饰器。|r2.5.0: Ascend/GPU/CPU => r2.6.0: GPU/CPU|自定义算子 +[mindspore.ops.AlltoAllV](https://mindspore.cn/docs/zh-CN/r2.6.0/api_python/ops/mindspore.ops.AlltoAllV.html#mindspore.ops.AlltoAllV)|New|相对AlltoAll来说,AlltoAllV算子支持不等分的切分和聚合。|r2.6.0: Ascend|通信算子 \ No newline at end of file diff --git a/resource/api_updates/ops_api_updates_en.md b/resource/api_updates/ops_api_updates_en.md index abe25f52b6..5733f68702 100644 --- a/resource/api_updates/ops_api_updates_en.md +++ b/resource/api_updates/ops_api_updates_en.md @@ -1,6 +1,19 @@ # mindspore.ops.primitive API Interface Change -Compared with the previous version, the added, deleted and supported platforms change information of `mindspore.ops.primitive` operators in MindSpore, is shown in the following table. +Compared with version 2.6.0, the added, deleted, and supported-platform change information of `mindspore.ops.primitive` operators in version 2.7.0 is shown in the following table.
|API|Status|Description|Support Platform|Class |:----|:----|:----|:----|:---- +[mindspore.ops.AllGatherV](https://mindspore.cn/docs/en/r2.7.0/api_python/ops/mindspore.ops.AllGatherV.html#mindspore.ops.AllGatherV)|New|Gathers uneven tensors from the specified communication group and returns the tensor which is all gathered.|r2.7.0: Ascend/GPU|Communication Operator +[mindspore.ops.ReduceScatterV](https://mindspore.cn/docs/en/r2.7.0/api_python/ops/mindspore.ops.ReduceScatterV.html#mindspore.ops.ReduceScatterV)|New|Reduces and scatters uneven tensors from the specified communication group and returns the tensor which is reduced and scattered.|r2.7.0: Ascend/GPU|Communication Operator + +Compared with version 2.5.0, the added, deleted, and supported-platform change information of `mindspore.ops.primitive` operators in version 2.6.0 is shown in the following table. + +|API|Status|Description|Support Platform|Class +|:----|:----|:----|:----|:---- +[mindspore.ops.AlltoAllV](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.AlltoAllV.html#mindspore.ops.AlltoAllV)|New|Compared with AllToAll, AlltoAllV supports uneven scatter and gather.|r2.6.0: Ascend|Communication Operator +[mindspore.ops.CustomOpBuilder](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.CustomOpBuilder.html#mindspore.ops.CustomOpBuilder)|New|CustomOpBuilder is used to initialize and configure custom operators for MindSpore.|r2.6.0: Ascend/CPU|Customizing Operator +[mindspore.ops.custom_info_register](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.custom_info_register.html#mindspore.ops.custom_info_register)|Changed|A decorator which is used to bind the registration information to the func parameter of mindspore.ops.Custom.|r2.5.0: => r2.6.0: Ascend/GPU/CPU|Customizing Operator +[mindspore.ops.kernel](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.kernel.html#mindspore.ops.kernel)|Changed|The decorator of the Hybrid DSL function for
the Custom Op.|r2.5.0: Ascend/GPU/CPU => r2.6.0: GPU/CPU|Customizing Operator +[mindspore.ops.Svd](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.Svd.html#mindspore.ops.Svd)|Changed|Computes the singular value decompositions of one or more matrices.|r2.5.0: GPU/CPU => r2.6.0: Ascend/GPU/CPU|Linear Algebraic Operator +[mindspore.ops.Morph](https://mindspore.cn/docs/en/r2.6.0/api_python/ops/mindspore.ops.Morph.html#mindspore.ops.Morph)|New|The Morph Primitive is used to encapsulate a user-defined function fn, allowing it to be used as a custom Primitive.|r2.6.0: |operations--Frame Operators -- Gitee
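The `mindspore.mint.floor_divide` entry earlier in this patch specifies rounding down to the closest integer, which differs from truncating division for negative operands. A pure-Python sketch making the distinction concrete (list arguments are illustrative; the real API takes tensors):

```python
def floor_divide(xs, ys):
    # Python's // floors toward negative infinity, matching "round down to
    # the closest integer" (unlike truncation, which rounds toward zero).
    return [x // y for x, y in zip(xs, ys)]
```

For example, -7 divided by 2 truncates to -3 but floor-divides to -4.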