
Ascend/MindSpeed-LLM

reset attention mask padding (mergeable)
ascend-cla/yes
!2901 shenjiarun 12
Review: 2025-06-24 17:44
[pytorch][feature]add noop-layer and tp-extend-ep in mg-hf of ckpt (mergeable)
ci-pipeline-passed
ascend-cla/yes
lgtm
!2895 qu_yueze 13
Review: 2025-06-24 14:37
[pytorch][feature]add noop-layer and tp-extend-ep in mg-hf of ckpt (mergeable)
ci-pipeline-passed
ascend-cla/yes
!2886 qu_yueze 45
Review: 2025-06-23 10:32
[pytorch][feature]add qwen3 reasoning template (mergeable)
ascend-cla/yes
stat/needs-squash
!2880 HanhuiChen 55
Review: 2025-06-21 11:14
support ring attention parallel for context parallel (mergeable)
ci-pipeline-failed
ascend-cla/yes
!2857 little_nik 20
Review: 2025-06-19 15:30
feat: balanced moe (has conflicts)
ci-pipeline-failed
ascend-cla/yes
!2856 邓佳 11
Review: 2025-06-19 14:38
optimize when CP and variable seq length functions are both enabled (mergeable)
ci-pipeline-passed
ascend-cla/yes
stat/needs-squash
!2853 xiecheng 39
Review: 2025-06-19 10:55
[mindspore][sh]Remove precision alignment modification points (mergeable)
ci-pipeline-failed
ascend-cla/yes
!2851 huangyuhao 19
Review: 2025-06-19 09:50
[mindspore][feature] add utils to update wrapper (mergeable)
ascend-cla/yes
stat/needs-squash
!2849 wangjialin 4
Review: 2025-06-19 09:26
[pytorch][bugfix]moe token drop for expert bias (has conflicts)
ci-pipeline-failed
ascend-cla/yes
stat/needs-squash
!2835 shengjy 29
Review: 2025-06-17 13:57
