diff --git a/README.md b/README.md
index 88b380540d4cf759133891913e95156c82c125fb..2f4f0a051045fab0f69eaaea1981a44a537a616d 100644
--- a/README.md
+++ b/README.md
@@ -26,27 +26,27 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型
 | Model | vLLM | TRT-LLM | TGI | IXUCA SDK |
 |-------------------------------|--------------------------------------------------------|---------------------------------------|------------------------------------|-----------|
-| Baichuan2-7B | [✅](models/nlp/llm/baichuan2-7b/vllm) | | | 4.2.0 |
-| ChatGLM-3-6B | [✅](models/nlp/llm/chatglm3-6b/vllm) | | | 4.2.0 |
-| ChatGLM-3-6B-32K | [✅](models/nlp/llm/chatglm3-6b-32k/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Llama-8B | [✅](models/nlp/llm/deepseek-r1-distill-llama-8b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Llama-70B | [✅](models/nlp/llm/deepseek-r1-distill-llama-70b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Qwen-1.5B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Qwen-7B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Qwen-14B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Qwen-32B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm) | | | 4.2.0 |
-| Llama2-7B | [✅](models/nlp/llm/llama2-7b/vllm) | [✅](models/nlp/llm/llama2-7b/trtllm) | | 4.2.0 |
-| Llama2-13B | | [✅](models/nlp/llm/llama2-13b/trtllm) | | 4.2.0 |
-| Llama2-70B | | [✅](models/nlp/llm/llama2-70b/trtllm) | | 4.2.0 |
-| Llama3-70B | [✅](models/nlp/llm/llama3-70b/vllm) | | | 4.2.0 |
-| Qwen-7B | [✅](models/nlp/llm/qwen-7b/vllm) | | | 4.2.0 |
-| Qwen1.5-7B | [✅](models/nlp/llm/qwen1.5-7b/vllm) | | [✅](models/nlp/llm/qwen1.5-7b/tgi) | 4.2.0 |
-| Qwen1.5-14B | [✅](models/nlp/llm/qwen1.5-14b/vllm) | | | 4.2.0 |
-| Qwen1.5-32B Chat | [✅](models/nlp/llm/qwen1.5-32b/vllm) | | | 4.2.0 |
-| Qwen1.5-72B | [✅](models/nlp/llm/qwen1.5-72b/vllm) | | | 4.2.0 |
-| Qwen2-7B Instruct | [✅](models/nlp/llm/qwen2-7b/vllm) | | | 4.2.0 |
-| Qwen2-72B Instruct | [✅](models/nlp/llm/qwen2-72b/vllm) | | | 4.2.0 |
-| StableLM2-1.6B | [✅](models/nlp/llm/stablelm/vllm) | | | 4.2.0 |
+| Baichuan2-7B | [✅](models/nlp/llm/baichuan2-7b/vllm) | | | 4.3.0 |
+| ChatGLM-3-6B | [✅](models/nlp/llm/chatglm3-6b/vllm) | | | 4.3.0 |
+| ChatGLM-3-6B-32K | [✅](models/nlp/llm/chatglm3-6b-32k/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Llama-8B | [✅](models/nlp/llm/deepseek-r1-distill-llama-8b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Llama-70B | [✅](models/nlp/llm/deepseek-r1-distill-llama-70b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Qwen-1.5B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Qwen-7B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Qwen-14B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Qwen-32B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm) | | | 4.3.0 |
+| Llama2-7B | [✅](models/nlp/llm/llama2-7b/vllm) | [✅](models/nlp/llm/llama2-7b/trtllm) | | 4.3.0 |
+| Llama2-13B | | [✅](models/nlp/llm/llama2-13b/trtllm) | | 4.3.0 |
+| Llama2-70B | | [✅](models/nlp/llm/llama2-70b/trtllm) | | 4.3.0 |
+| Llama3-70B | [✅](models/nlp/llm/llama3-70b/vllm) | | | 4.3.0 |
+| Qwen-7B | [✅](models/nlp/llm/qwen-7b/vllm) | | | 4.3.0 |
+| Qwen1.5-7B | [✅](models/nlp/llm/qwen1.5-7b/vllm) | | [✅](models/nlp/llm/qwen1.5-7b/tgi) | 4.3.0 |
+| Qwen1.5-14B | [✅](models/nlp/llm/qwen1.5-14b/vllm) | | | 4.3.0 |
+| Qwen1.5-32B Chat | [✅](models/nlp/llm/qwen1.5-32b/vllm) | | | 4.3.0 |
+| Qwen1.5-72B | [✅](models/nlp/llm/qwen1.5-72b/vllm) | | | 4.3.0 |
+| Qwen2-7B Instruct | [✅](models/nlp/llm/qwen2-7b/vllm) | | | 4.3.0 |
+| Qwen2-72B Instruct | [✅](models/nlp/llm/qwen2-72b/vllm) | | | 4.3.0 |
+| StableLM2-1.6B | [✅](models/nlp/llm/stablelm/vllm) | | | 4.3.0 |

 ### 计算机视觉(CV)

@@ -54,200 +54,200 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |------------------------|-------|--------------------------------------------------------|-----------------------------------------------------------|-----------|
-| AlexNet | FP16 | [✅](models/cv/classification/alexnet/igie) | [✅](models/cv/classification/alexnet/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/alexnet/igie) | [✅](models/cv/classification/alexnet/ixrt) | 4.2.0 |
-| CLIP | FP16 | [✅](models/cv/classification/clip/igie) | | 4.2.0 |
-| Conformer-B | FP16 | [✅](models/cv/classification/conformer_base/igie) | | 4.2.0 |
-| ConvNeXt-Base | FP16 | [✅](models/cv/classification/convnext_base/igie) | [✅](models/cv/classification/convnext_base/ixrt) | 4.2.0 |
-| ConvNext-S | FP16 | [✅](models/cv/classification/convnext_s/igie) | | 4.2.0 |
-| ConvNeXt-Small | FP16 | [✅](models/cv/classification/convnext_small/igie) | [✅](models/cv/classification/convnext_small/ixrt) | 4.2.0 |
-| ConvNeXt-Tiny | FP16 | [✅](models/cv/classification/convnext_tiny/igie) | | 4.2.0 |
-| CSPDarkNet53 | FP16 | [✅](models/cv/classification/cspdarknet53/igie) | [✅](models/cv/classification/cspdarknet53/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/cspdarknet53/ixrt) | 4.2.0 |
-| CSPResNet50 | FP16 | [✅](models/cv/classification/cspresnet50/igie) | [✅](models/cv/classification/cspresnet50/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/cspresnet50/ixrt) | 4.2.0 |
-| CSPResNeXt50 | FP16 | [✅](models/cv/classification/cspresnext50/igie) | | 4.2.0 |
-| DeiT-tiny | FP16 | [✅](models/cv/classification/deit_tiny/igie) | [✅](models/cv/classification/deit_tiny/ixrt) | 4.2.0 |
-| DenseNet121 | FP16 | [✅](models/cv/classification/densenet121/igie) | [✅](models/cv/classification/densenet121/ixrt) | 4.2.0 |
-| DenseNet161 | FP16 | [✅](models/cv/classification/densenet161/igie) | [✅](models/cv/classification/densenet161/ixrt) | 4.2.0 |
-| DenseNet169 | FP16 | [✅](models/cv/classification/densenet169/igie) | [✅](models/cv/classification/densenet169/ixrt) | 4.2.0 |
-| DenseNet201 | FP16 | [✅](models/cv/classification/densenet201/igie) | [✅](models/cv/classification/densenet201/ixrt) | 4.2.0 |
-| EfficientNet-B0 | FP16 | [✅](models/cv/classification/efficientnet_b0/igie) | [✅](models/cv/classification/efficientnet_b0/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/efficientnet_b0/ixrt) | 4.2.0 |
-| EfficientNet-B1 | FP16 | [✅](models/cv/classification/efficientnet_b1/igie) | [✅](models/cv/classification/efficientnet_b1/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/efficientnet_b1/ixrt) | 4.2.0 |
-| EfficientNet-B2 | FP16 | [✅](models/cv/classification/efficientnet_b2/igie) | [✅](models/cv/classification/efficientnet_b2/ixrt) | 4.2.0 |
-| EfficientNet-B3 | FP16 | [✅](models/cv/classification/efficientnet_b3/igie) | [✅](models/cv/classification/efficientnet_b3/ixrt) | 4.2.0 |
-| EfficientNet-B4 | FP16 | [✅](models/cv/classification/efficientnet_b4/igie) | | 4.2.0 |
-| EfficientNet-B5 | FP16 | [✅](models/cv/classification/efficientnet_b5/igie) | | 4.2.0 |
-| EfficientNetV2 | FP16 | [✅](models/cv/classification/efficientnet_v2/igie) | [✅](models/cv/classification/efficientnet_v2/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/efficientnet_v2/ixrt) | 4.2.0 |
-| EfficientNetv2_rw_t | FP16 | [✅](models/cv/classification/efficientnetv2_rw_t/igie) | [✅](models/cv/classification/efficientnetv2_rw_t/ixrt) | 4.2.0 |
-| EfficientNetv2_s | FP16 | [✅](models/cv/classification/efficientnet_v2_s/igie) | [✅](models/cv/classification/efficientnet_v2_s/ixrt) | 4.2.0 |
-| GoogLeNet | FP16 | [✅](models/cv/classification/googlenet/igie) | [✅](models/cv/classification/googlenet/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/googlenet/igie) | [✅](models/cv/classification/googlenet/ixrt) | 4.2.0 |
-| HRNet-W18 | FP16 | [✅](models/cv/classification/hrnet_w18/igie) | [✅](models/cv/classification/hrnet_w18/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/hrnet_w18/ixrt) | 4.2.0 |
-| InceptionV3 | FP16 | [✅](models/cv/classification/inception_v3/igie) | [✅](models/cv/classification/inception_v3/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/inception_v3/igie) | [✅](models/cv/classification/inception_v3/ixrt) | 4.2.0 |
-| Inception-ResNet-V2 | FP16 | | [✅](models/cv/classification/inception_resnet_v2/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/inception_resnet_v2/ixrt) | 4.2.0 |
-| Mixer_B | FP16 | [✅](models/cv/classification/mlp_mixer_base/igie) | | 4.2.0 |
-| MNASNet0_5 | FP16 | [✅](models/cv/classification/mnasnet0_5/igie) | | 4.2.0 |
-| MNASNet0_75 | FP16 | [✅](models/cv/classification/mnasnet0_75/igie) | | 4.2.0 |
-| MNASNet1_0 | FP16 | [✅](models/cv/classification/mnasnet1_0/igie) | | 4.2.0 |
-| MobileNetV2 | FP16 | [✅](models/cv/classification/mobilenet_v2/igie) | [✅](models/cv/classification/mobilenet_v2/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/mobilenet_v2/igie) | [✅](models/cv/classification/mobilenet_v2/ixrt) | 4.2.0 |
-| MobileNetV3_Large | FP16 | [✅](models/cv/classification/mobilenet_v3_large/igie) | | 4.2.0 |
-| MobileNetV3_Small | FP16 | [✅](models/cv/classification/mobilenet_v3/igie) | [✅](models/cv/classification/mobilenet_v3/ixrt) | 4.2.0 |
+| AlexNet | FP16 | [✅](models/cv/classification/alexnet/igie) | [✅](models/cv/classification/alexnet/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/alexnet/igie) | [✅](models/cv/classification/alexnet/ixrt) | 4.3.0 |
+| CLIP | FP16 | [✅](models/cv/classification/clip/igie) | | 4.3.0 |
+| Conformer-B | FP16 | [✅](models/cv/classification/conformer_base/igie) | | 4.3.0 |
+| ConvNeXt-Base | FP16 | [✅](models/cv/classification/convnext_base/igie) | [✅](models/cv/classification/convnext_base/ixrt) | 4.3.0 |
+| ConvNext-S | FP16 | [✅](models/cv/classification/convnext_s/igie) | | 4.3.0 |
+| ConvNeXt-Small | FP16 | [✅](models/cv/classification/convnext_small/igie) | [✅](models/cv/classification/convnext_small/ixrt) | 4.3.0 |
+| ConvNeXt-Tiny | FP16 | [✅](models/cv/classification/convnext_tiny/igie) | | 4.3.0 |
+| CSPDarkNet53 | FP16 | [✅](models/cv/classification/cspdarknet53/igie) | [✅](models/cv/classification/cspdarknet53/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/cspdarknet53/ixrt) | 4.3.0 |
+| CSPResNet50 | FP16 | [✅](models/cv/classification/cspresnet50/igie) | [✅](models/cv/classification/cspresnet50/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/cspresnet50/ixrt) | 4.3.0 |
+| CSPResNeXt50 | FP16 | [✅](models/cv/classification/cspresnext50/igie) | | 4.3.0 |
+| DeiT-tiny | FP16 | [✅](models/cv/classification/deit_tiny/igie) | [✅](models/cv/classification/deit_tiny/ixrt) | 4.3.0 |
+| DenseNet121 | FP16 | [✅](models/cv/classification/densenet121/igie) | [✅](models/cv/classification/densenet121/ixrt) | 4.3.0 |
+| DenseNet161 | FP16 | [✅](models/cv/classification/densenet161/igie) | [✅](models/cv/classification/densenet161/ixrt) | 4.3.0 |
+| DenseNet169 | FP16 | [✅](models/cv/classification/densenet169/igie) | [✅](models/cv/classification/densenet169/ixrt) | 4.3.0 |
+| DenseNet201 | FP16 | [✅](models/cv/classification/densenet201/igie) | [✅](models/cv/classification/densenet201/ixrt) | 4.3.0 |
+| EfficientNet-B0 | FP16 | [✅](models/cv/classification/efficientnet_b0/igie) | [✅](models/cv/classification/efficientnet_b0/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/efficientnet_b0/ixrt) | 4.3.0 |
+| EfficientNet-B1 | FP16 | [✅](models/cv/classification/efficientnet_b1/igie) | [✅](models/cv/classification/efficientnet_b1/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/efficientnet_b1/ixrt) | 4.3.0 |
+| EfficientNet-B2 | FP16 | [✅](models/cv/classification/efficientnet_b2/igie) | [✅](models/cv/classification/efficientnet_b2/ixrt) | 4.3.0 |
+| EfficientNet-B3 | FP16 | [✅](models/cv/classification/efficientnet_b3/igie) | [✅](models/cv/classification/efficientnet_b3/ixrt) | 4.3.0 |
+| EfficientNet-B4 | FP16 | [✅](models/cv/classification/efficientnet_b4/igie) | | 4.3.0 |
+| EfficientNet-B5 | FP16 | [✅](models/cv/classification/efficientnet_b5/igie) | | 4.3.0 |
+| EfficientNetV2 | FP16 | [✅](models/cv/classification/efficientnet_v2/igie) | [✅](models/cv/classification/efficientnet_v2/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/efficientnet_v2/ixrt) | 4.3.0 |
+| EfficientNetv2_rw_t | FP16 | [✅](models/cv/classification/efficientnetv2_rw_t/igie) | [✅](models/cv/classification/efficientnetv2_rw_t/ixrt) | 4.3.0 |
+| EfficientNetv2_s | FP16 | [✅](models/cv/classification/efficientnet_v2_s/igie) | [✅](models/cv/classification/efficientnet_v2_s/ixrt) | 4.3.0 |
+| GoogLeNet | FP16 | [✅](models/cv/classification/googlenet/igie) | [✅](models/cv/classification/googlenet/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/googlenet/igie) | [✅](models/cv/classification/googlenet/ixrt) | 4.3.0 |
+| HRNet-W18 | FP16 | [✅](models/cv/classification/hrnet_w18/igie) | [✅](models/cv/classification/hrnet_w18/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/hrnet_w18/ixrt) | 4.3.0 |
+| InceptionV3 | FP16 | [✅](models/cv/classification/inception_v3/igie) | [✅](models/cv/classification/inception_v3/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/inception_v3/igie) | [✅](models/cv/classification/inception_v3/ixrt) | 4.3.0 |
+| Inception-ResNet-V2 | FP16 | | [✅](models/cv/classification/inception_resnet_v2/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/inception_resnet_v2/ixrt) | 4.3.0 |
+| Mixer_B | FP16 | [✅](models/cv/classification/mlp_mixer_base/igie) | | 4.3.0 |
+| MNASNet0_5 | FP16 | [✅](models/cv/classification/mnasnet0_5/igie) | | 4.3.0 |
+| MNASNet0_75 | FP16 | [✅](models/cv/classification/mnasnet0_75/igie) | | 4.3.0 |
+| MNASNet1_0 | FP16 | [✅](models/cv/classification/mnasnet1_0/igie) | | 4.3.0 |
+| MobileNetV2 | FP16 | [✅](models/cv/classification/mobilenet_v2/igie) | [✅](models/cv/classification/mobilenet_v2/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/mobilenet_v2/igie) | [✅](models/cv/classification/mobilenet_v2/ixrt) | 4.3.0 |
+| MobileNetV3_Large | FP16 | [✅](models/cv/classification/mobilenet_v3_large/igie) | | 4.3.0 |
+| MobileNetV3_Small | FP16 | [✅](models/cv/classification/mobilenet_v3/igie) | [✅](models/cv/classification/mobilenet_v3/ixrt) | 4.3.0 |
 | MViTv2_base | FP16 | [✅](models/cv/classification/mvitv2_base/igie) | | 4.2.0 |
-| RegNet_x_16gf | FP16 | [✅](models/cv/classification/regnet_x_16gf/igie) | | 4.2.0 |
-| RegNet_x_1_6gf | FP16 | [✅](models/cv/classification/regnet_x_1_6gf/igie) | | 4.2.0 |
-| RegNet_x_3_2gf | FP16 | [✅](models/cv/classification/regnet_x_3_2gf/igie) | | 4.2.0 |
-| RegNet_y_1_6gf | FP16 | [✅](models/cv/classification/regnet_y_1_6gf/igie) | | 4.2.0 |
-| RegNet_y_16gf | FP16 | [✅](models/cv/classification/regnet_y_16gf/igie) | | 4.2.0 |
-| RepVGG | FP16 | [✅](models/cv/classification/repvgg/igie) | [✅](models/cv/classification/repvgg/ixrt) | 4.2.0 |
-| Res2Net50 | FP16 | [✅](models/cv/classification/res2net50/igie) | [✅](models/cv/classification/res2net50/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/res2net50/ixrt) | 4.2.0 |
-| ResNeSt50 | FP16 | [✅](models/cv/classification/resnest50/igie) | | 4.2.0 |
-| ResNet101 | FP16 | [✅](models/cv/classification/resnet101/igie) | [✅](models/cv/classification/resnet101/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/resnet101/igie) | [✅](models/cv/classification/resnet101/ixrt) | 4.2.0 |
-| ResNet152 | FP16 | [✅](models/cv/classification/resnet152/igie) | | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/resnet152/igie) | | 4.2.0 |
-| ResNet18 | FP16 | [✅](models/cv/classification/resnet18/igie) | [✅](models/cv/classification/resnet18/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/resnet18/igie) | [✅](models/cv/classification/resnet18/ixrt) | 4.2.0 |
-| ResNet34 | FP16 | | [✅](models/cv/classification/resnet34/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/resnet34/ixrt) | 4.2.0 |
-| ResNet50 | FP16 | [✅](models/cv/classification/resnet50/igie) | [✅](models/cv/classification/resnet50/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/resnet50/igie) | | 4.2.0 |
-| ResNetV1D50 | FP16 | [✅](models/cv/classification/resnetv1d50/igie) | [✅](models/cv/classification/resnetv1d50/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/resnetv1d50/ixrt) | 4.2.0 |
-| ResNeXt50_32x4d | FP16 | [✅](models/cv/classification/resnext50_32x4d/igie) | [✅](models/cv/classification/resnext50_32x4d/ixrt) | 4.2.0 |
-| ResNeXt101_64x4d | FP16 | [✅](models/cv/classification/resnext101_64x4d/igie) | [✅](models/cv/classification/resnext101_64x4d/ixrt) | 4.2.0 |
-| ResNeXt101_32x8d | FP16 | [✅](models/cv/classification/resnext101_32x8d/igie) | [✅](models/cv/classification/resnext101_32x8d/ixrt) | 4.2.0 |
-| SEResNet50 | FP16 | [✅](models/cv/classification/se_resnet50/igie) | | 4.2.0 |
-| ShuffleNetV1 | FP16 | | [✅](models/cv/classification/shufflenet_v1/ixrt) | 4.2.0 |
-| ShuffleNetV2_x0_5 | FP16 | [✅](models/cv/classification/shufflenetv2_x0_5/igie) | [✅](models/cv/classification/shufflenetv2_x0_5/ixrt) | 4.2.0 |
-| ShuffleNetV2_x1_0 | FP16 | [✅](models/cv/classification/shufflenetv2_x1_0/igie) | [✅](models/cv/classification/shufflenetv2_x1_0/ixrt) | 4.2.0 |
-| ShuffleNetV2_x1_5 | FP16 | [✅](models/cv/classification/shufflenetv2_x1_5/igie) | [✅](models/cv/classification/shufflenetv2_x1_5/ixrt) | 4.2.0 |
-| ShuffleNetV2_x2_0 | FP16 | [✅](models/cv/classification/shufflenetv2_x2_0/igie) | [✅](models/cv/classification/shufflenetv2_x2_0/ixrt) | 4.2.0 |
-| SqueezeNet 1.0 | FP16 | [✅](models/cv/classification/squeezenet_v1_0/igie) | [✅](models/cv/classification/squeezenet_v1_0/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/squeezenet_v1_0/ixrt) | 4.2.0 |
-| SqueezeNet 1.1 | FP16 | [✅](models/cv/classification/squeezenet_v1_1/igie) | [✅](models/cv/classification/squeezenet_v1_1/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/squeezenet_v1_1/ixrt) | 4.2.0 |
-| SVT Base | FP16 | [✅](models/cv/classification/svt_base/igie) | | 4.2.0 |
-| Swin Transformer | FP16 | [✅](models/cv/classification/swin_transformer/igie) | | 4.2.0 |
-| Swin Transformer Large | FP16 | | [✅](models/cv/classification/swin_transformer_large/ixrt) | 4.2.0 |
-| Twins_PCPVT | FP16 | [✅](models/cv/classification/twins_pcpvt/igie) | | 4.2.0 |
-| VAN_B0 | FP16 | [✅](models/cv/classification/van_b0/igie) | | 4.2.0 |
-| VGG11 | FP16 | [✅](models/cv/classification/vgg11/igie) | | 4.2.0 |
-| VGG16 | FP16 | [✅](models/cv/classification/vgg16/igie) | [✅](models/cv/classification/vgg16/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/vgg16/igie) | | 4.2.0 |
-| VGG19 | FP16 | [✅](models/cv/classification/vgg19/igie) | | 4.2.0 |
-| VGG19_BN | FP16 | [✅](models/cv/classification/vgg19_bn/igie) | | 4.2.0 |
-| ViT | FP16 | [✅](models/cv/classification/vit/igie) | | 4.2.0 |
-| Wide ResNet50 | FP16 | [✅](models/cv/classification/wide_resnet50/igie) | [✅](models/cv/classification/wide_resnet50/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/wide_resnet50/igie) | [✅](models/cv/classification/wide_resnet50/ixrt) | 4.2.0 |
-| Wide ResNet101 | FP16 | [✅](models/cv/classification/wide_resnet101/igie) | | 4.2.0 |
+| RegNet_x_16gf | FP16 | [✅](models/cv/classification/regnet_x_16gf/igie) | | 4.3.0 |
+| RegNet_x_1_6gf | FP16 | [✅](models/cv/classification/regnet_x_1_6gf/igie) | | 4.3.0 |
+| RegNet_x_3_2gf | FP16 | [✅](models/cv/classification/regnet_x_3_2gf/igie) | | 4.3.0 |
+| RegNet_y_1_6gf | FP16 | [✅](models/cv/classification/regnet_y_1_6gf/igie) | | 4.3.0 |
+| RegNet_y_16gf | FP16 | [✅](models/cv/classification/regnet_y_16gf/igie) | | 4.3.0 |
+| RepVGG | FP16 | [✅](models/cv/classification/repvgg/igie) | [✅](models/cv/classification/repvgg/ixrt) | 4.3.0 |
+| Res2Net50 | FP16 | [✅](models/cv/classification/res2net50/igie) | [✅](models/cv/classification/res2net50/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/res2net50/ixrt) | 4.3.0 |
+| ResNeSt50 | FP16 | [✅](models/cv/classification/resnest50/igie) | | 4.3.0 |
+| ResNet101 | FP16 | [✅](models/cv/classification/resnet101/igie) | [✅](models/cv/classification/resnet101/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/resnet101/igie) | [✅](models/cv/classification/resnet101/ixrt) | 4.3.0 |
+| ResNet152 | FP16 | [✅](models/cv/classification/resnet152/igie) | | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/resnet152/igie) | | 4.3.0 |
+| ResNet18 | FP16 | [✅](models/cv/classification/resnet18/igie) | [✅](models/cv/classification/resnet18/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/resnet18/igie) | [✅](models/cv/classification/resnet18/ixrt) | 4.3.0 |
+| ResNet34 | FP16 | | [✅](models/cv/classification/resnet34/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/resnet34/ixrt) | 4.3.0 |
+| ResNet50 | FP16 | [✅](models/cv/classification/resnet50/igie) | [✅](models/cv/classification/resnet50/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/resnet50/igie) | | 4.3.0 |
+| ResNetV1D50 | FP16 | [✅](models/cv/classification/resnetv1d50/igie) | [✅](models/cv/classification/resnetv1d50/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/resnetv1d50/ixrt) | 4.3.0 |
+| ResNeXt50_32x4d | FP16 | [✅](models/cv/classification/resnext50_32x4d/igie) | [✅](models/cv/classification/resnext50_32x4d/ixrt) | 4.3.0 |
+| ResNeXt101_64x4d | FP16 | [✅](models/cv/classification/resnext101_64x4d/igie) | [✅](models/cv/classification/resnext101_64x4d/ixrt) | 4.3.0 |
+| ResNeXt101_32x8d | FP16 | [✅](models/cv/classification/resnext101_32x8d/igie) | [✅](models/cv/classification/resnext101_32x8d/ixrt) | 4.3.0 |
+| SEResNet50 | FP16 | [✅](models/cv/classification/se_resnet50/igie) | | 4.3.0 |
+| ShuffleNetV1 | FP16 | | [✅](models/cv/classification/shufflenet_v1/ixrt) | 4.3.0 |
+| ShuffleNetV2_x0_5 | FP16 | [✅](models/cv/classification/shufflenetv2_x0_5/igie) | [✅](models/cv/classification/shufflenetv2_x0_5/ixrt) | 4.3.0 |
+| ShuffleNetV2_x1_0 | FP16 | [✅](models/cv/classification/shufflenetv2_x1_0/igie) | [✅](models/cv/classification/shufflenetv2_x1_0/ixrt) | 4.3.0 |
+| ShuffleNetV2_x1_5 | FP16 | [✅](models/cv/classification/shufflenetv2_x1_5/igie) | [✅](models/cv/classification/shufflenetv2_x1_5/ixrt) | 4.3.0 |
+| ShuffleNetV2_x2_0 | FP16 | [✅](models/cv/classification/shufflenetv2_x2_0/igie) | [✅](models/cv/classification/shufflenetv2_x2_0/ixrt) | 4.3.0 |
+| SqueezeNet 1.0 | FP16 | [✅](models/cv/classification/squeezenet_v1_0/igie) | [✅](models/cv/classification/squeezenet_v1_0/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/squeezenet_v1_0/ixrt) | 4.3.0 |
+| SqueezeNet 1.1 | FP16 | [✅](models/cv/classification/squeezenet_v1_1/igie) | [✅](models/cv/classification/squeezenet_v1_1/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/squeezenet_v1_1/ixrt) | 4.3.0 |
+| SVT Base | FP16 | [✅](models/cv/classification/svt_base/igie) | | 4.3.0 |
+| Swin Transformer | FP16 | [✅](models/cv/classification/swin_transformer/igie) | | 4.3.0 |
+| Swin Transformer Large | FP16 | | [✅](models/cv/classification/swin_transformer_large/ixrt) | 4.3.0 |
+| Twins_PCPVT | FP16 | [✅](models/cv/classification/twins_pcpvt/igie) | | 4.3.0 |
+| VAN_B0 | FP16 | [✅](models/cv/classification/van_b0/igie) | | 4.3.0 |
+| VGG11 | FP16 | [✅](models/cv/classification/vgg11/igie) | | 4.3.0 |
+| VGG16 | FP16 | [✅](models/cv/classification/vgg16/igie) | [✅](models/cv/classification/vgg16/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/vgg16/igie) | | 4.3.0 |
+| VGG19 | FP16 | [✅](models/cv/classification/vgg19/igie) | | 4.3.0 |
+| VGG19_BN | FP16 | [✅](models/cv/classification/vgg19_bn/igie) | | 4.3.0 |
+| ViT | FP16 | [✅](models/cv/classification/vit/igie) | | 4.3.0 |
+| Wide ResNet50 | FP16 | [✅](models/cv/classification/wide_resnet50/igie) | [✅](models/cv/classification/wide_resnet50/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/wide_resnet50/igie) | [✅](models/cv/classification/wide_resnet50/ixrt) | 4.3.0 |
+| Wide ResNet101 | FP16 | [✅](models/cv/classification/wide_resnet101/igie) | | 4.3.0 |

 #### 目标检测

 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |------------|-------|-------------------------------------------------|-------------------------------------------------|-----------|
-| ATSS | FP16 | [✅](models/cv/object_detection/atss/igie) | | 4.2.0 |
-| CenterNet | FP16 | [✅](models/cv/object_detection/centernet/igie) | [✅](models/cv/object_detection/centernet/ixrt) | 4.2.0 |
-| DETR | FP16 | | [✅](models/cv/object_detection/detr/ixrt) | 4.2.0 |
-| FCOS | FP16 | [✅](models/cv/object_detection/fcos/igie) | [✅](models/cv/object_detection/fcos/ixrt) | 4.2.0 |
-| FoveaBox | FP16 | [✅](models/cv/object_detection/foveabox/igie) | [✅](models/cv/object_detection/foveabox/ixrt) | 4.2.0 |
-| FSAF | FP16 | [✅](models/cv/object_detection/fsaf/igie) | [✅](models/cv/object_detection/fsaf/ixrt) | 4.2.0 |
-| GFL | FP16 | [✅](models/cv/object_detection/gfl/igie) | | 4.2.0 |
-| HRNet | FP16 | [✅](models/cv/object_detection/hrnet/igie) | [✅](models/cv/object_detection/hrnet/ixrt) | 4.2.0 |
-| PAA | FP16 | [✅](models/cv/object_detection/paa/igie) | | 4.2.0 |
-| RetinaFace | FP16 | [✅](models/cv/object_detection/retinaface/igie) | [✅](models/cv/object_detection/retinaface/ixrt) | 4.2.0 |
-| RetinaNet | FP16 | [✅](models/cv/object_detection/retinanet/igie) | | 4.2.0 |
-| RTMDet | FP16 | [✅](models/cv/object_detection/rtmdet/igie) | | 4.2.0 |
-| SABL | FP16 | [✅](models/cv/object_detection/sabl/igie) | | 4.2.0 |
-| YOLOv3 | FP16 | [✅](models/cv/object_detection/yolov3/igie) | [✅](models/cv/object_detection/yolov3/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov3/igie) | [✅](models/cv/object_detection/yolov3/ixrt) | 4.2.0 |
-| YOLOv4 | FP16 | [✅](models/cv/object_detection/yolov4/igie) | [✅](models/cv/object_detection/yolov4/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov4/igie16) | [✅](models/cv/object_detection/yolov4/ixrt16) | 4.2.0 |
-| YOLOv5 | FP16 | [✅](models/cv/object_detection/yolov5/igie) | [✅](models/cv/object_detection/yolov5/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov5/igie) | [✅](models/cv/object_detection/yolov5/ixrt) | 4.2.0 |
-| YOLOv5s | FP16 | | [✅](models/cv/object_detection/yolov5s/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/object_detection/yolov5s/ixrt) | 4.2.0 |
-| YOLOv6 | FP16 | [✅](models/cv/object_detection/yolov6/igie) | [✅](models/cv/object_detection/yolov6/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/object_detection/yolov6/ixrt) | 4.2.0 |
-| YOLOv7 | FP16 | [✅](models/cv/object_detection/yolov7/igie) | [✅](models/cv/object_detection/yolov7/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov7/igie) | [✅](models/cv/object_detection/yolov7/ixrt) | 4.2.0 |
-| YOLOv8 | FP16 | [✅](models/cv/object_detection/yolov8/igie) | [✅](models/cv/object_detection/yolov8/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov8/igie) | [✅](models/cv/object_detection/yolov8/ixrt) | 4.2.0 |
-| YOLOv9 | FP16 | [✅](models/cv/object_detection/yolov9/igie) | [✅](models/cv/object_detection/yolov9/ixrt) | 4.2.0 |
-| YOLOv10 | FP16 | [✅](models/cv/object_detection/yolov10/igie) | [✅](models/cv/object_detection/yolov10/ixrt) | 4.2.0 |
-| YOLOv11 | FP16 | [✅](models/cv/object_detection/yolov11/igie) | [✅](models/cv/object_detection/yolov11/ixrt) | 4.2.0 |
-| YOLOv12 | FP16 | [✅](models/cv/object_detection/yolov12/igie) | | 4.2.0 |
-| YOLOX | FP16 | [✅](models/cv/object_detection/yolox/igie) | [✅](models/cv/object_detection/yolox/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolox/igie) | [✅](models/cv/object_detection/yolox/ixrt) | 4.2.0 |
+| ATSS | FP16 | [✅](models/cv/object_detection/atss/igie) | | 4.3.0 |
+| CenterNet | FP16 | [✅](models/cv/object_detection/centernet/igie) | [✅](models/cv/object_detection/centernet/ixrt) | 4.3.0 |
+| DETR | FP16 | | [✅](models/cv/object_detection/detr/ixrt) | 4.3.0 |
+| FCOS | FP16 | [✅](models/cv/object_detection/fcos/igie) | [✅](models/cv/object_detection/fcos/ixrt) | 4.3.0 |
+| FoveaBox | FP16 | [✅](models/cv/object_detection/foveabox/igie) | [✅](models/cv/object_detection/foveabox/ixrt) | 4.3.0 |
+| FSAF | FP16 | [✅](models/cv/object_detection/fsaf/igie) | [✅](models/cv/object_detection/fsaf/ixrt) | 4.3.0 |
+| GFL | FP16 | [✅](models/cv/object_detection/gfl/igie) | | 4.3.0 |
+| HRNet | FP16 | [✅](models/cv/object_detection/hrnet/igie) | [✅](models/cv/object_detection/hrnet/ixrt) | 4.3.0 |
+| PAA | FP16 | [✅](models/cv/object_detection/paa/igie) | | 4.3.0 |
+| RetinaFace | FP16 | [✅](models/cv/object_detection/retinaface/igie) | [✅](models/cv/object_detection/retinaface/ixrt) | 4.3.0 |
+| RetinaNet | FP16 | [✅](models/cv/object_detection/retinanet/igie) | | 4.3.0 |
+| RTMDet | FP16 | [✅](models/cv/object_detection/rtmdet/igie) | | 4.3.0 |
+| SABL | FP16 | [✅](models/cv/object_detection/sabl/igie) | | 4.3.0 |
+| YOLOv3 | FP16 | [✅](models/cv/object_detection/yolov3/igie) | [✅](models/cv/object_detection/yolov3/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov3/igie) | [✅](models/cv/object_detection/yolov3/ixrt) | 4.3.0 |
+| YOLOv4 | FP16 | [✅](models/cv/object_detection/yolov4/igie) | [✅](models/cv/object_detection/yolov4/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov4/igie16) | [✅](models/cv/object_detection/yolov4/ixrt16) | 4.3.0 |
+| YOLOv5 | FP16 | [✅](models/cv/object_detection/yolov5/igie) | [✅](models/cv/object_detection/yolov5/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov5/igie) | [✅](models/cv/object_detection/yolov5/ixrt) | 4.3.0 |
+| YOLOv5s | FP16 | | [✅](models/cv/object_detection/yolov5s/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/object_detection/yolov5s/ixrt) | 4.3.0 |
+| YOLOv6 | FP16 | [✅](models/cv/object_detection/yolov6/igie) | [✅](models/cv/object_detection/yolov6/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/object_detection/yolov6/ixrt) | 4.3.0 |
+| YOLOv7 | FP16 | [✅](models/cv/object_detection/yolov7/igie) | [✅](models/cv/object_detection/yolov7/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov7/igie) | [✅](models/cv/object_detection/yolov7/ixrt) | 4.3.0 |
+| YOLOv8 | FP16 | [✅](models/cv/object_detection/yolov8/igie) | [✅](models/cv/object_detection/yolov8/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov8/igie) | [✅](models/cv/object_detection/yolov8/ixrt) | 4.3.0 |
+| YOLOv9 | FP16 | [✅](models/cv/object_detection/yolov9/igie) | [✅](models/cv/object_detection/yolov9/ixrt) | 4.3.0 |
+| YOLOv10 | FP16 | [✅](models/cv/object_detection/yolov10/igie) | [✅](models/cv/object_detection/yolov10/ixrt) | 4.3.0 |
+| YOLOv11 | FP16 | [✅](models/cv/object_detection/yolov11/igie) | [✅](models/cv/object_detection/yolov11/ixrt) | 4.3.0 |
+| YOLOv12 | FP16 | [✅](models/cv/object_detection/yolov12/igie) | | 4.3.0 |
+| YOLOX | FP16 | [✅](models/cv/object_detection/yolox/igie) | [✅](models/cv/object_detection/yolox/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolox/igie) | [✅](models/cv/object_detection/yolox/ixrt) | 4.3.0 |

 #### 人脸识别

 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |---------|-------|------|----------------------------------------------|-----------|
-| FaceNet | FP16 | | [✅](models/cv/face_recognition/facenet/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/face_recognition/facenet/ixrt) | 4.2.0 |
+| FaceNet | FP16 | | [✅](models/cv/face_recognition/facenet/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/face_recognition/facenet/ixrt) | 4.3.0 |

 #### 光学字符识别(OCR)

 | Model | Prec. | IGIE | IXUCA SDK |
 |---------------|-------|---------------------------------------|-----------|
-| Kie_layoutXLM | FP16 | [✅](models/cv/ocr/kie_layoutxlm/igie) | 4.2.0 |
-| SVTR | FP16 | [✅](models/cv/ocr/svtr/igie) | 4.2.0 |
+| Kie_layoutXLM | FP16 | [✅](models/cv/ocr/kie_layoutxlm/igie) | 4.3.0 |
+| SVTR | FP16 | [✅](models/cv/ocr/svtr/igie) | 4.3.0 |

 #### 姿态估计

 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |----------------------|-------|-----------------------------------------------|----------------------------------------------------------|-----------|
-| HRNetPose | FP16 | [✅](models/cv/pose_estimation/hrnetpose/igie) | | 4.2.0 |
-| Lightweight OpenPose | FP16 | | [✅](models/cv/pose_estimation/lightweight_openpose/ixrt) | 4.2.0 |
-| RTMPose | FP16 | [✅](models/cv/pose_estimation/rtmpose/igie) | [✅](models/cv/pose_estimation/rtmpose/ixrt) | 4.2.0 |
+| HRNetPose | FP16 | [✅](models/cv/pose_estimation/hrnetpose/igie) | | 4.3.0 |
+| Lightweight OpenPose | FP16 | | [✅](models/cv/pose_estimation/lightweight_openpose/ixrt) | 4.3.0 |
+| RTMPose | FP16 | [✅](models/cv/pose_estimation/rtmpose/igie) | [✅](models/cv/pose_estimation/rtmpose/ixrt) | 4.3.0 |

 #### 实例分割

 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |------------|-------|------|-----------------------------------------------------|-----------|
 | Mask R-CNN | FP16 | | [✅](models/cv/instance_segmentation/mask_rcnn/ixrt) | 4.2.0 |
-| SOLOv1 | FP16 | | [✅](models/cv/instance_segmentation/solov1/ixrt) | 4.2.0 |
+| SOLOv1 | FP16 | | [✅](models/cv/instance_segmentation/solov1/ixrt) | 4.3.0 |

 #### 语义分割

 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |-------|-------|------------------------------------------------|------|-----------|
-| UNet | FP16 | [✅](models/cv/semantic_segmentation/unet/igie) | | 4.2.0 |
+| UNet | FP16 | [✅](models/cv/semantic_segmentation/unet/igie) | | 4.3.0 |

 #### 多目标跟踪

 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |---------------------|-------|----------------------------------------------------|------|-----------|
-| FastReID | FP16 | [✅](models/cv/multi_object_tracking/fastreid/igie) | | 4.2.0 |
-| DeepSort | FP16 | [✅](models/cv/multi_object_tracking/deepsort/igie) | | 4.2.0 |
-| | INT8 | [✅](models/cv/multi_object_tracking/deepsort/igie) | | 4.2.0 |
-| RepNet-Vehicle-ReID | FP16 | [✅](models/cv/multi_object_tracking/repnet/igie) | | 4.2.0 |
+| FastReID | FP16 | [✅](models/cv/multi_object_tracking/fastreid/igie) | | 4.3.0 |
+| DeepSort | FP16 | [✅](models/cv/multi_object_tracking/deepsort/igie) | | 4.3.0 |
+| | INT8 | [✅](models/cv/multi_object_tracking/deepsort/igie) | | 4.3.0 |
+| RepNet-Vehicle-ReID | FP16 | [✅](models/cv/multi_object_tracking/repnet/igie) | | 4.3.0 |

 ### 多模态

 | Model | vLLM | IxFormer | IXUCA SDK |
 |---------------------|-----------------------------------------------------------------------|------------------------------------------------------------|-----------|
-| Aria | [✅](models/multimodal/vision_language_model/aria/vllm) | | 4.2.0 |
-| Chameleon-7B | [✅](models/multimodal/vision_language_model/chameleon_7b/vllm) | | 4.2.0 |
-| CLIP | | [✅](models/multimodal/vision_language_model/clip/ixformer) | 4.2.0 |
-| Fuyu-8B | [✅](models/multimodal/vision_language_model/fuyu_8b/vllm) | | 4.2.0 |
-| H2OVL Mississippi | [✅](models/multimodal/vision_language_model/h2vol/vllm) | | 4.2.0 |
-| Idefics3 | [✅](models/multimodal/vision_language_model/idefics3/vllm) | | 4.2.0 |
-| InternVL2-4B | [✅](models/multimodal/vision_language_model/intern_vl/vllm) | | 4.2.0 |
-| LLaVA | [✅](models/multimodal/vision_language_model/llava/vllm) | | 4.2.0 |
-| LLaVA-Next-Video-7B | [✅](models/multimodal/vision_language_model/llava_next_video_7b/vllm) | | 4.2.0 |
-| Llama-3.2 | [✅](models/multimodal/vision_language_model/llama-3.2/vllm) | | 4.2.0 |
-| MiniCPM-V 2 | [✅](models/multimodal/vision_language_model/minicpm_v/vllm) | | 4.2.0 |
-| Pixtral | [✅](models/multimodal/vision_language_model/pixtral/vllm) | | 4.2.0 |
+| Aria | [✅](models/multimodal/vision_language_model/aria/vllm) | | 4.3.0 |
+| Chameleon-7B | [✅](models/multimodal/vision_language_model/chameleon_7b/vllm) | | 4.3.0 |
+| CLIP | | [✅](models/multimodal/vision_language_model/clip/ixformer) | 4.3.0 |
+| Fuyu-8B | [✅](models/multimodal/vision_language_model/fuyu_8b/vllm) | | 4.3.0 |
+| H2OVL Mississippi | [✅](models/multimodal/vision_language_model/h2vol/vllm) | | 4.3.0 |
+| Idefics3 | [✅](models/multimodal/vision_language_model/idefics3/vllm) | | 4.3.0 |
+| InternVL2-4B | [✅](models/multimodal/vision_language_model/intern_vl/vllm) | | 4.3.0 |
+| LLaVA | [✅](models/multimodal/vision_language_model/llava/vllm) | | 4.3.0 |
+| LLaVA-Next-Video-7B | [✅](models/multimodal/vision_language_model/llava_next_video_7b/vllm) | | 4.3.0 |
+| Llama-3.2 | [✅](models/multimodal/vision_language_model/llama-3.2/vllm) | | 4.3.0 |
+| MiniCPM-V 2 | [✅](models/multimodal/vision_language_model/minicpm_v/vllm) | | 4.3.0 |
+| Pixtral | [✅](models/multimodal/vision_language_model/pixtral/vllm) | | 4.3.0 |

 ### 自然语言处理(NLP)

@@ -255,15 +255,15 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型
 | Model | Prec. 
| IGIE | IxRT | IXUCA SDK | |------------------|-------|-------------------------------------------|-------------------------------------------|-----------| -| ALBERT | FP16 | | [✅](models/nlp/plm/albert/ixrt) | 4.2.0 | -| BERT Base NER | INT8 | [✅](models/nlp/plm/bert_base_ner/igie) | | 4.2.0 | -| BERT Base SQuAD | FP16 | [✅](models/nlp/plm/bert_base_squad/igie) | [✅](models/nlp/plm/bert_base_squad/ixrt) | 4.2.0 | -| | INT8 | | [✅](models/nlp/plm/bert_base_squad/ixrt) | 4.2.0 | -| BERT Large SQuAD | FP16 | [✅](models/nlp/plm/bert_large_squad/igie) | [✅](models/nlp/plm/bert_large_squad/ixrt) | 4.2.0 | -| | INT8 | [✅](models/nlp/plm/bert_large_squad/igie) | [✅](models/nlp/plm/bert_large_squad/ixrt) | 4.2.0 | -| DeBERTa | FP16 | | [✅](models/nlp/plm/deberta/ixrt) | 4.2.0 | -| RoBERTa | FP16 | | [✅](models/nlp/plm/roberta/ixrt) | 4.2.0 | -| RoFormer | FP16 | | [✅](models/nlp/plm/roformer/ixrt) | 4.2.0 | +| ALBERT | FP16 | | [✅](models/nlp/plm/albert/ixrt) | 4.3.0 | +| BERT Base NER | INT8 | [✅](models/nlp/plm/bert_base_ner/igie) | | 4.3.0 | +| BERT Base SQuAD | FP16 | [✅](models/nlp/plm/bert_base_squad/igie) | [✅](models/nlp/plm/bert_base_squad/ixrt) | 4.3.0 | +| | INT8 | | [✅](models/nlp/plm/bert_base_squad/ixrt) | 4.3.0 | +| BERT Large SQuAD | FP16 | [✅](models/nlp/plm/bert_large_squad/igie) | [✅](models/nlp/plm/bert_large_squad/ixrt) | 4.3.0 | +| | INT8 | [✅](models/nlp/plm/bert_large_squad/igie) | [✅](models/nlp/plm/bert_large_squad/ixrt) | 4.3.0 | +| DeBERTa | FP16 | | [✅](models/nlp/plm/deberta/ixrt) | 4.3.0 | +| RoBERTa | FP16 | | [✅](models/nlp/plm/roberta/ixrt) | 4.3.0 | +| RoFormer | FP16 | | [✅](models/nlp/plm/roformer/ixrt) | 4.3.0 | | VideoBERT | FP16 | | [✅](models/nlp/plm/videobert/ixrt) | 4.2.0 | ### 语音 @@ -272,7 +272,7 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型 | Model | Prec. 
| IGIE | IxRT | IXUCA SDK | |-----------------|-------|-----------------------------------------------------|-----------------------------------------------------------|-----------| -| Conformer | FP16 | [✅](models/audio/speech_recognition/conformer/igie) | [✅](models/audio/speech_recognition/conformer/ixrt) | 4.2.0 | +| Conformer | FP16 | [✅](models/audio/speech_recognition/conformer/igie) | [✅](models/audio/speech_recognition/conformer/ixrt) | 4.3.0 | | Transformer ASR | FP16 | | [✅](models/audio/speech_recognition/transformer_asr/ixrt) | 4.2.0 | ### 其他 @@ -281,7 +281,7 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型 | Model | Prec. | IGIE | IxRT | IXUCA SDK | |-------------|-------|------|------------------------------------------------------|-----------| -| Wide & Deep | FP16 | | [✅](models/others/recommendation/wide_and_deep/ixrt) | 4.2.0 | +| Wide & Deep | FP16 | | [✅](models/others/recommendation/wide_and_deep/ixrt) | 4.3.0 | --- diff --git a/models/audio/speech_recognition/conformer/igie/README.md b/models/audio/speech_recognition/conformer/igie/README.md index ae96f9d4b9433e57973f2f7d6d1b5f1e206ef9aa..585d70d0cafec13c05fcc68f8b92b44603fa070a 100644 --- a/models/audio/speech_recognition/conformer/igie/README.md +++ b/models/audio/speech_recognition/conformer/igie/README.md @@ -11,7 +11,8 @@ Conformer applies convolution to the Encoder layer of Transformer, enhancing the | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/audio/speech_recognition/conformer/ixrt/README.md b/models/audio/speech_recognition/conformer/ixrt/README.md index d73a68deb6cee05e3875b37866e3850c3bd914e9..56ea26cc6bd25ae10da46ab20ffaa9dcc669b48c 100644 --- a/models/audio/speech_recognition/conformer/ixrt/README.md +++ 
b/models/audio/speech_recognition/conformer/ixrt/README.md @@ -8,7 +8,8 @@ Conformer is a speech recognition model proposed by Google in 2020. It combines | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/alexnet/igie/README.md b/models/cv/classification/alexnet/igie/README.md index 1c69881e8cdd7439251a482a258ee5d8a2a2ae76..c1f779cb49340ad3967c7e4340fd8e646987182a 100644 --- a/models/cv/classification/alexnet/igie/README.md +++ b/models/cv/classification/alexnet/igie/README.md @@ -12,7 +12,8 @@ non-linearity, allowing the model to learn complex features from input images. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/alexnet/ixrt/README.md b/models/cv/classification/alexnet/ixrt/README.md index 723c21455500dc443bb6c1763b7f4d1c4022abd9..34b11957351f66e4f4312d0793196e5f8e3c3f3c 100644 --- a/models/cv/classification/alexnet/ixrt/README.md +++ b/models/cv/classification/alexnet/ixrt/README.md @@ -9,7 +9,8 @@ layers as the basic building blocks. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/clip/igie/README.md b/models/cv/classification/clip/igie/README.md index 23bdeb482f56a00b51de34d1ebef280866b8a79f..8460c7635b65ce3818ad021422fa60ee791f8e42 100644 --- a/models/cv/classification/clip/igie/README.md +++ b/models/cv/classification/clip/igie/README.md @@ -8,7 +8,8 @@ CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/conformer_base/igie/README.md b/models/cv/classification/conformer_base/igie/README.md index cb81979d5ad681d2f463827c1566d799047fa9aa..d05a8756b8c8c7d36c189eeecb2902d805e37c9e 100644 --- a/models/cv/classification/conformer_base/igie/README.md +++ b/models/cv/classification/conformer_base/igie/README.md @@ -8,7 +8,8 @@ Conformer is a novel network architecture that addresses the limitations of conv | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_base/igie/README.md b/models/cv/classification/convnext_base/igie/README.md index 2a60872fab9fb96376fc57de1ec37adb972f0f73..c0774fcfaf1de8e6518e08db86b2ac0c974bdb6a 100644 --- a/models/cv/classification/convnext_base/igie/README.md +++ 
b/models/cv/classification/convnext_base/igie/README.md @@ -8,7 +8,8 @@ The ConvNeXt Base model represents a significant stride in the evolution of conv | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_base/ixrt/README.md b/models/cv/classification/convnext_base/ixrt/README.md index b90a29d139d458b1f805ce5b4a5133035be7031b..9dfc874ff0055b882002952fcd690830f566bb0b 100644 --- a/models/cv/classification/convnext_base/ixrt/README.md +++ b/models/cv/classification/convnext_base/ixrt/README.md @@ -8,7 +8,8 @@ The ConvNeXt Base model represents a significant stride in the evolution of conv | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_s/igie/README.md b/models/cv/classification/convnext_s/igie/README.md index 9222a9599af09fda79f09f4684b68cf065907f3a..34371ae5f7bfc5552c29adb04a4b5a9d13fbff91 100644 --- a/models/cv/classification/convnext_s/igie/README.md +++ b/models/cv/classification/convnext_s/igie/README.md @@ -8,7 +8,8 @@ ConvNeXt-S is a small-sized model in the ConvNeXt family, designed to balance pe | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_small/igie/README.md b/models/cv/classification/convnext_small/igie/README.md index 
c665dcb6f0f3f6177048e30010142d469c27207b..65edf23dbb0819c27142094ee787ec986e44d91b 100644 --- a/models/cv/classification/convnext_small/igie/README.md +++ b/models/cv/classification/convnext_small/igie/README.md @@ -8,7 +8,8 @@ The ConvNeXt Small model represents a significant stride in the evolution of con | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_small/ixrt/README.md b/models/cv/classification/convnext_small/ixrt/README.md index 9d2d4d353ed5bd45a1dc70651a47f29800fc3682..8f216b60bf32d6c841eaf82a2c767e067fbe23aa 100644 --- a/models/cv/classification/convnext_small/ixrt/README.md +++ b/models/cv/classification/convnext_small/ixrt/README.md @@ -8,7 +8,8 @@ The ConvNeXt Small model represents a significant stride in the evolution of con | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_tiny/igie/README.md b/models/cv/classification/convnext_tiny/igie/README.md index 3d0831352122b0981f1a3fbc58c4aaf0ed6229c2..a4427cebece780011fe6bbf09fa8c641cb52c362 100644 --- a/models/cv/classification/convnext_tiny/igie/README.md +++ b/models/cv/classification/convnext_tiny/igie/README.md @@ -8,7 +8,8 @@ ConvNeXt is a modern convolutional neural network architecture proposed by Faceb | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 
| ## Model Preparation diff --git a/models/cv/classification/cspdarknet53/igie/README.md b/models/cv/classification/cspdarknet53/igie/README.md index 07da984c34c040f7d3f9e3b5862eef19b63cdbe3..50d690414b8965d5b336803cd9f56eca6a26881e 100644 --- a/models/cv/classification/cspdarknet53/igie/README.md +++ b/models/cv/classification/cspdarknet53/igie/README.md @@ -8,7 +8,8 @@ CSPDarkNet53 is an enhanced convolutional neural network architecture that reduc | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/cspdarknet53/ixrt/README.md b/models/cv/classification/cspdarknet53/ixrt/README.md index 1cc98a0eda26f0eebcf8201de4da9a938277344c..861860d8a69e3a80b50d6f157c1ae95f0c7dfde5 100644 --- a/models/cv/classification/cspdarknet53/ixrt/README.md +++ b/models/cv/classification/cspdarknet53/ixrt/README.md @@ -8,7 +8,8 @@ CSPDarkNet53 is an enhanced convolutional neural network architecture that reduc | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/cspresnet50/igie/README.md b/models/cv/classification/cspresnet50/igie/README.md index 5c01bbd147810e72ef3b66a6632e53261bd427a7..41da2145a1c47791a6a063ec10376d5c34d3fb4f 100644 --- a/models/cv/classification/cspresnet50/igie/README.md +++ b/models/cv/classification/cspresnet50/igie/README.md @@ -10,7 +10,8 @@ computations, optimizes gradient flow, and enhances feature representation. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/cspresnet50/ixrt/README.md b/models/cv/classification/cspresnet50/ixrt/README.md index 9d8d5f1861b85046d8c35d1700c013892cd594dd..01bed75f4687b1b0f4627a9f6663038439dff216 100644 --- a/models/cv/classification/cspresnet50/ixrt/README.md +++ b/models/cv/classification/cspresnet50/ixrt/README.md @@ -9,7 +9,8 @@ CSPResNet50 is the one of best models. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/cspresnext50/igie/README.md b/models/cv/classification/cspresnext50/igie/README.md index 7a8ddc1e2dfcbb803fec896a84b0ef4c61d74991..0397bfa3000f1bc7ae40fd692e5955351fa4e9bc 100644 --- a/models/cv/classification/cspresnext50/igie/README.md +++ b/models/cv/classification/cspresnext50/igie/README.md @@ -8,7 +8,8 @@ CSPResNeXt50 is a convolutional neural network that combines the CSPNet and ResN | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/deit_tiny/igie/README.md b/models/cv/classification/deit_tiny/igie/README.md index 439a3cc76972601d3314e2fdceb732004a0bf97d..82215f9861e5dc7ef9dcbbb2156227442e8614a3 100644 --- a/models/cv/classification/deit_tiny/igie/README.md +++ b/models/cv/classification/deit_tiny/igie/README.md @@ -8,7 +8,8 @@ DeiT 
Tiny is a lightweight vision transformer designed for data-efficient learni | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/deit_tiny/ixrt/README.md b/models/cv/classification/deit_tiny/ixrt/README.md index b1874b6adfe73b958853777d9c29555ec795f030..5f5a92e92d7b1fc3fe7d743a0661310c20e67b70 100644 --- a/models/cv/classification/deit_tiny/ixrt/README.md +++ b/models/cv/classification/deit_tiny/ixrt/README.md @@ -8,7 +8,8 @@ DeiT Tiny is a lightweight vision transformer designed for data-efficient learni | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/densenet121/igie/README.md b/models/cv/classification/densenet121/igie/README.md index 1637a25fabce93f2d38d4fee2495e8efcbe4d87b..dc03776587cad9a16bd996be610ff55945963819 100644 --- a/models/cv/classification/densenet121/igie/README.md +++ b/models/cv/classification/densenet121/igie/README.md @@ -8,7 +8,8 @@ DenseNet-121 is a convolutional neural network architecture that belongs to the | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/densenet121/ixrt/README.md b/models/cv/classification/densenet121/ixrt/README.md index a5dbc7c7f19a4121e1d769ec50a9b7e2c308489b..cf204af81f5d6d7d80f9e3fdbd6764e689154af2 100644 --- 
a/models/cv/classification/densenet121/ixrt/README.md +++ b/models/cv/classification/densenet121/ixrt/README.md @@ -8,7 +8,8 @@ Dense Convolutional Network (DenseNet), connects each layer to every other layer | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/densenet161/igie/README.md b/models/cv/classification/densenet161/igie/README.md index c2f5a294b9fcf31e16ebc8fef8427fdf636b8775..9ecec725c32a96ef87db5a596ff87921998a1cb4 100644 --- a/models/cv/classification/densenet161/igie/README.md +++ b/models/cv/classification/densenet161/igie/README.md @@ -8,7 +8,8 @@ DenseNet161 is a convolutional neural network architecture that belongs to the f | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/densenet161/ixrt/README.md b/models/cv/classification/densenet161/ixrt/README.md index fc6f1877574b7ac28c41da600e6b5e3e7b286dcd..294d2d3ce9569053d16e302dc71cb8cc6988ef72 100644 --- a/models/cv/classification/densenet161/ixrt/README.md +++ b/models/cv/classification/densenet161/ixrt/README.md @@ -8,7 +8,8 @@ DenseNet161 is a convolutional neural network architecture that belongs to the f | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/densenet169/igie/README.md 
b/models/cv/classification/densenet169/igie/README.md index 5e961ca42a647ef04328e17a0eb1d62e4a0abf4d..fa6f98071a4d6de3a794eb01e61f3007119948c9 100644 --- a/models/cv/classification/densenet169/igie/README.md +++ b/models/cv/classification/densenet169/igie/README.md @@ -8,7 +8,8 @@ DenseNet-169 is a variant of the Dense Convolutional Network (DenseNet) architec | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/densenet169/ixrt/README.md b/models/cv/classification/densenet169/ixrt/README.md index 66289e6cea8793a76a1ac48e5256648f09c49d39..c105e417af3de08f1b6d2de36a11524e4c6a04b1 100644 --- a/models/cv/classification/densenet169/ixrt/README.md +++ b/models/cv/classification/densenet169/ixrt/README.md @@ -8,7 +8,8 @@ Dense Convolutional Network (DenseNet), connects each layer to every other layer | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/densenet201/igie/README.md b/models/cv/classification/densenet201/igie/README.md index fc54b25baf7227341683e2bb7ed80c10e64082c4..8040ad6eeae20bed0b127daffb1fc0c9fc8b2015 100644 --- a/models/cv/classification/densenet201/igie/README.md +++ b/models/cv/classification/densenet201/igie/README.md @@ -8,7 +8,8 @@ DenseNet201 is a deep convolutional neural network that stands out for its uniqu | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 
25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/densenet201/ixrt/README.md b/models/cv/classification/densenet201/ixrt/README.md index f394306cbcb39981ffca2c9d3940ff08fdc0b7e3..7b9810b23d42668c1c8d66ad7c6bbc6a9bad851d 100644 --- a/models/cv/classification/densenet201/ixrt/README.md +++ b/models/cv/classification/densenet201/ixrt/README.md @@ -8,7 +8,8 @@ DenseNet201 is a deep convolutional neural network that stands out for its uniqu | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/efficientnet_b0/igie/README.md b/models/cv/classification/efficientnet_b0/igie/README.md index 413032a8c914960b8db055000d2b9dd5a8ab973d..60bb53733b4877bab580a0b4024c01238919ba94 100644 --- a/models/cv/classification/efficientnet_b0/igie/README.md +++ b/models/cv/classification/efficientnet_b0/igie/README.md @@ -8,7 +8,8 @@ EfficientNet-B0 is a lightweight yet highly efficient convolutional neural netwo | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/efficientnet_b0/ixrt/README.md b/models/cv/classification/efficientnet_b0/ixrt/README.md index 2594d622322776959accd4c164c2b4106228c441..7606d841114d25983bc343c668257e8ece897e2e 100644 --- a/models/cv/classification/efficientnet_b0/ixrt/README.md +++ b/models/cv/classification/efficientnet_b0/ixrt/README.md @@ -8,7 +8,8 @@ EfficientNet B0 is a convolutional neural network architecture that belongs to t | GPU | [IXUCA 
SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/efficientnet_b1/igie/README.md b/models/cv/classification/efficientnet_b1/igie/README.md index 89e72b3bae0d496635330acabc29ff2fc72042ad..f36927952e4b070d1c4d0f23c7b688abd964408c 100644 --- a/models/cv/classification/efficientnet_b1/igie/README.md +++ b/models/cv/classification/efficientnet_b1/igie/README.md @@ -8,7 +8,8 @@ EfficientNet B1 is a convolutional neural network architecture that falls under | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/efficientnet_b1/ixrt/README.md b/models/cv/classification/efficientnet_b1/ixrt/README.md index 884bf2a99dd886127c956613dca5b989272f202d..dbad178f54c4e46decca14124c87447a9769423e 100644 --- a/models/cv/classification/efficientnet_b1/ixrt/README.md +++ b/models/cv/classification/efficientnet_b1/ixrt/README.md @@ -8,7 +8,8 @@ EfficientNet B1 is one of the variants in the EfficientNet family of neural netw | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/efficientnet_b2/igie/README.md b/models/cv/classification/efficientnet_b2/igie/README.md index fab2353be6e23801d9f571693734d28854edb8bd..f7acca02591c8ae5a701fe3ff6d1c43ea65245b1 100644 --- a/models/cv/classification/efficientnet_b2/igie/README.md +++ 
b/models/cv/classification/efficientnet_b2/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B2 is a member of the EfficientNet family, a series of convolutiona
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_b2/ixrt/README.md b/models/cv/classification/efficientnet_b2/ixrt/README.md
index 510c77ba8a8557adaf382fdc57371a5e160d5561..80d95edf9e04317db3d9a68dfbc0170a2936b1cf 100644
--- a/models/cv/classification/efficientnet_b2/ixrt/README.md
+++ b/models/cv/classification/efficientnet_b2/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNet B2 is a member of the EfficientNet family, a series of convolutiona
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_b3/igie/README.md b/models/cv/classification/efficientnet_b3/igie/README.md
index 44c0fd3e4de2bc012e39a558ffdbc121c2f0a9e3..cf57fcb816212bc6da6abdcd5f27bb008d128e69 100644
--- a/models/cv/classification/efficientnet_b3/igie/README.md
+++ b/models/cv/classification/efficientnet_b3/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B3 is a member of the EfficientNet family, a series of convolutiona
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_b3/ixrt/README.md b/models/cv/classification/efficientnet_b3/ixrt/README.md
index 345086ab0c86b1988a7fd36f6f9b016f1ec6a88c..fd312942210ec98e1411790e466f27796b1359a0 100644
--- a/models/cv/classification/efficientnet_b3/ixrt/README.md
+++ b/models/cv/classification/efficientnet_b3/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNet B3 is a member of the EfficientNet family, a series of convolutiona
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_b4/igie/README.md b/models/cv/classification/efficientnet_b4/igie/README.md
index 68a12a6a49f967597c0a5bdfb6449f68287c33d4..741112783f5a59a26b6eeb0127efb0f74257a468 100644
--- a/models/cv/classification/efficientnet_b4/igie/README.md
+++ b/models/cv/classification/efficientnet_b4/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B4 is a high-performance convolutional neural network model introdu
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_b5/igie/README.md b/models/cv/classification/efficientnet_b5/igie/README.md
index e2a626bfa60ae7ee4a34a538926b8a5e6a93cd40..03e21de40656b57cb0b3d72ef934e844b08f8638 100644
--- a/models/cv/classification/efficientnet_b5/igie/README.md
+++ b/models/cv/classification/efficientnet_b5/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B5 is an efficient convolutional network model designed through a c
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_v2/igie/README.md b/models/cv/classification/efficientnet_v2/igie/README.md
index 752160a57dfe129aa1b6b6734dfc2f0e7028d9bb..7a9f3510a347016672a7c10ae52f7c695bd352fc 100644
--- a/models/cv/classification/efficientnet_v2/igie/README.md
+++ b/models/cv/classification/efficientnet_v2/igie/README.md
@@ -8,7 +8,8 @@ EfficientNetV2 M is an optimized model in the EfficientNetV2 series, which was d
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_v2/ixrt/README.md b/models/cv/classification/efficientnet_v2/ixrt/README.md
index 3742df786b574514bdc77c885a4ffc20d3ddf6cf..cdcf5de826a16405a208f8a6f6d07024c8e09725 100755
--- a/models/cv/classification/efficientnet_v2/ixrt/README.md
+++ b/models/cv/classification/efficientnet_v2/ixrt/README.md
@@ -10,7 +10,8 @@ incorporates a series of enhancement strategies to further boost performance.
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_v2_s/igie/README.md b/models/cv/classification/efficientnet_v2_s/igie/README.md
index 8a8fa2faee8ba23aad2dfa8a0d7412b5b60cbcbb..3173a515cf0da6f0537a3f062c93c18695e5cd9a 100644
--- a/models/cv/classification/efficientnet_v2_s/igie/README.md
+++ b/models/cv/classification/efficientnet_v2_s/igie/README.md
@@ -8,7 +8,8 @@ EfficientNetV2 S is an optimized model in the EfficientNetV2 series, which was d
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnet_v2_s/ixrt/README.md b/models/cv/classification/efficientnet_v2_s/ixrt/README.md
index 171fed9414f0db6bc56737756c5d56e50b5573a6..bf9a90eeb1e899472ed14e921999140da03f4890 100644
--- a/models/cv/classification/efficientnet_v2_s/ixrt/README.md
+++ b/models/cv/classification/efficientnet_v2_s/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNetV2 S is an optimized model in the EfficientNetV2 series, which was d
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnetv2_rw_t/igie/README.md b/models/cv/classification/efficientnetv2_rw_t/igie/README.md
index 3239cf65ce512844d7c41c65884debea20410ced..393d38c784110dbdaf70ff7654832870afac877c 100644
--- a/models/cv/classification/efficientnetv2_rw_t/igie/README.md
+++ b/models/cv/classification/efficientnetv2_rw_t/igie/README.md
@@ -8,7 +8,8 @@ EfficientNetV2_rw_t is an enhanced version of the EfficientNet family of convolu
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/efficientnetv2_rw_t/ixrt/README.md b/models/cv/classification/efficientnetv2_rw_t/ixrt/README.md
index 8d17a94c5a049969fbf19f666d71229e2c21efe8..b97f0dd19281ef5d6de313205c06871111b786df 100644
--- a/models/cv/classification/efficientnetv2_rw_t/ixrt/README.md
+++ b/models/cv/classification/efficientnetv2_rw_t/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNetV2_rw_t is an enhanced version of the EfficientNet family of convolu
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/googlenet/igie/README.md b/models/cv/classification/googlenet/igie/README.md
index 05a5df13069587a1175e1c014473b01b7e7e99e3..f822a23112ad9b0eb65c301161b4da5ba456a12a 100644
--- a/models/cv/classification/googlenet/igie/README.md
+++ b/models/cv/classification/googlenet/igie/README.md
@@ -8,7 +8,8 @@ Introduced in 2014, GoogleNet revolutionized image classification models by intr
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/googlenet/ixrt/README.md b/models/cv/classification/googlenet/ixrt/README.md
index 4b2f093522d987acf4d28b962c9777bfb49e187f..252bc958b953cd234ed5bc23533ef210776e89bc 100644
--- a/models/cv/classification/googlenet/ixrt/README.md
+++ b/models/cv/classification/googlenet/ixrt/README.md
@@ -8,7 +8,8 @@ GoogLeNet is a type of convolutional neural network based on the Inception archi
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/hrnet_w18/igie/README.md b/models/cv/classification/hrnet_w18/igie/README.md
index e69d58409218c4126b8892c8baa87c01f4823c63..c7aae72b182dfd20f3a272f1fb8eb369887e0a49 100644
--- a/models/cv/classification/hrnet_w18/igie/README.md
+++ b/models/cv/classification/hrnet_w18/igie/README.md
@@ -8,7 +8,8 @@ HRNet, short for High-Resolution Network, presents a paradigm shift in handling
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/hrnet_w18/ixrt/README.md b/models/cv/classification/hrnet_w18/ixrt/README.md
index c4d4bb1e0506604a69bc8b1cc88e0556efce3097..0d121c8b5a94639f6921a7eb6e382401331fb523 100644
--- a/models/cv/classification/hrnet_w18/ixrt/README.md
+++ b/models/cv/classification/hrnet_w18/ixrt/README.md
@@ -8,7 +8,8 @@ HRNet-W18 is a powerful image classification model developed by Jingdong AI Rese
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/inception_resnet_v2/ixrt/README.md b/models/cv/classification/inception_resnet_v2/ixrt/README.md
index d288522e294e634b2fbe981b78b9f1115c5ad50d..60a8c5f5ae48eed7617ad3626643b57cc8139ff8 100755
--- a/models/cv/classification/inception_resnet_v2/ixrt/README.md
+++ b/models/cv/classification/inception_resnet_v2/ixrt/README.md
@@ -8,7 +8,8 @@ Inception-ResNet-V2 is a deep learning model proposed by Google in 2016, which c
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/inception_v3/igie/README.md b/models/cv/classification/inception_v3/igie/README.md
index c04e865f892effed9a372a516bed14a2e0ea538d..fea4028a98b8d4bc237cb481df05fb825ecdffe1 100644
--- a/models/cv/classification/inception_v3/igie/README.md
+++ b/models/cv/classification/inception_v3/igie/README.md
@@ -8,7 +8,8 @@ Inception v3 is a convolutional neural network architecture designed for image r
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/inception_v3/ixrt/README.md b/models/cv/classification/inception_v3/ixrt/README.md
index 8807f231375a8188a1c6e4439e8800407061559a..5fd218dfa88baf8d9e4fdc13204d7f94c1c195da 100755
--- a/models/cv/classification/inception_v3/ixrt/README.md
+++ b/models/cv/classification/inception_v3/ixrt/README.md
@@ -8,7 +8,8 @@ Inception v3 is a convolutional neural network architecture designed for image r
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mlp_mixer_base/igie/README.md b/models/cv/classification/mlp_mixer_base/igie/README.md
index de3fadf47577d6ca67b1b91724f01a69d52b53ca..69d89ac0c1f29dcec2b4a0e45f7538a504c45f97 100644
--- a/models/cv/classification/mlp_mixer_base/igie/README.md
+++ b/models/cv/classification/mlp_mixer_base/igie/README.md
@@ -8,7 +8,8 @@ MLP-Mixer Base is a foundational model in the MLP-Mixer family, designed to use
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mnasnet0_5/igie/README.md b/models/cv/classification/mnasnet0_5/igie/README.md
index 4847f2ce7e1f2f5315f16ddbf79720f3ac0295df..5c00672138f3014dd1d734004943d4ffbf349c65 100644
--- a/models/cv/classification/mnasnet0_5/igie/README.md
+++ b/models/cv/classification/mnasnet0_5/igie/README.md
@@ -8,7 +8,8 @@ MNASNet0_5 is a neural network architecture optimized for mobile devices, design
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mnasnet0_75/igie/README.md b/models/cv/classification/mnasnet0_75/igie/README.md
index 12bf5601fb392106255b4364e8bc88d8fd738466..5893fc9cdef05f65e36aeeaa63f9c74951d2b5c8 100644
--- a/models/cv/classification/mnasnet0_75/igie/README.md
+++ b/models/cv/classification/mnasnet0_75/igie/README.md
@@ -8,7 +8,8 @@ MNASNet0_75 is a lightweight convolutional neural network designed for mobile de
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mnasnet1_0/igie/README.md b/models/cv/classification/mnasnet1_0/igie/README.md
index cde6a03332c8df5ced8bb9d414d0e0a548ded5d7..65e2c47a6b96a4c572fc5a9d44a4901a5a8ac600 100644
--- a/models/cv/classification/mnasnet1_0/igie/README.md
+++ b/models/cv/classification/mnasnet1_0/igie/README.md
@@ -8,7 +8,8 @@
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mobilenet_v2/igie/README.md b/models/cv/classification/mobilenet_v2/igie/README.md
index ee928c6ef5a85d15dacd05ea8a985daac5dccf31..4b833dcc68725fb576a6a5d13990427ac20f7b3c 100644
--- a/models/cv/classification/mobilenet_v2/igie/README.md
+++ b/models/cv/classification/mobilenet_v2/igie/README.md
@@ -8,7 +8,8 @@ MobileNetV2 is an improvement on V1. Its new ideas include Linear Bottleneck and
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mobilenet_v2/ixrt/README.md b/models/cv/classification/mobilenet_v2/ixrt/README.md
index f702504c911ba2564f153fbfee52faf9f26f1599..e6c658cbfa149370d84d92ebd8174cca11802ba2 100644
--- a/models/cv/classification/mobilenet_v2/ixrt/README.md
+++ b/models/cv/classification/mobilenet_v2/ixrt/README.md
@@ -8,7 +8,8 @@ The MobileNetV2 architecture is based on an inverted residual structure where th
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mobilenet_v3/igie/README.md b/models/cv/classification/mobilenet_v3/igie/README.md
index a85288d9057fbe81f7b4dcbcbba05ef1a0423aa2..ca6113074b3b7b4e21ec0447d4c67e561d1d494b 100644
--- a/models/cv/classification/mobilenet_v3/igie/README.md
+++ b/models/cv/classification/mobilenet_v3/igie/README.md
@@ -8,7 +8,8 @@ MobileNetV3_Small is a lightweight convolutional neural network architecture des
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mobilenet_v3/ixrt/README.md b/models/cv/classification/mobilenet_v3/ixrt/README.md
index 6827ec6965f3df71b9e672b49985eca5276c558e..149bed834b9f038120e062d282b7068bfbb14263 100644
--- a/models/cv/classification/mobilenet_v3/ixrt/README.md
+++ b/models/cv/classification/mobilenet_v3/ixrt/README.md
@@ -8,7 +8,8 @@ MobileNetV3 is a convolutional neural network that is tuned to mobile phone CPUs
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/mobilenet_v3_large/igie/README.md b/models/cv/classification/mobilenet_v3_large/igie/README.md
index 116b9169a9162c257ef0fee9e919208e1b8fc85b..df08d4fbeaf17008778dc3d245305e113efac02a 100644
--- a/models/cv/classification/mobilenet_v3_large/igie/README.md
+++ b/models/cv/classification/mobilenet_v3_large/igie/README.md
@@ -8,7 +8,8 @@ MobileNetV3_Large builds upon the success of its predecessors by incorporating s
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/regnet_x_16gf/igie/README.md b/models/cv/classification/regnet_x_16gf/igie/README.md
index a2cc24e58b432a0299c6ecf22e37072e030c2cad..0b037c5c5e97a3c74d482725a9126aa5511d1a59 100644
--- a/models/cv/classification/regnet_x_16gf/igie/README.md
+++ b/models/cv/classification/regnet_x_16gf/igie/README.md
@@ -9,7 +9,8 @@ RegNet_x_16gf is a deep convolutional neural network from the RegNet family, int
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/regnet_x_1_6gf/igie/README.md b/models/cv/classification/regnet_x_1_6gf/igie/README.md
index 73aaa51e98aa5baa5c7c773263f8cd2a576a245f..81b7e39d28476ac0959f19acb00de6e9cc887e48 100644
--- a/models/cv/classification/regnet_x_1_6gf/igie/README.md
+++ b/models/cv/classification/regnet_x_1_6gf/igie/README.md
@@ -8,7 +8,8 @@ RegNet is a family of models designed for image classification tasks, as describ
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/regnet_x_3_2gf/igie/README.md b/models/cv/classification/regnet_x_3_2gf/igie/README.md
index 8290fbab29cd780b400cf888477ad88411bf32d5..1875b10281331f0275eb57debf8a35913325d5bf 100644
--- a/models/cv/classification/regnet_x_3_2gf/igie/README.md
+++ b/models/cv/classification/regnet_x_3_2gf/igie/README.md
@@ -8,7 +8,8 @@ RegNet_x_3_2gf is a model from the RegNet series, inspired by the paper *Designi
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
diff --git a/models/cv/classification/regnet_y_16gf/igie/README.md b/models/cv/classification/regnet_y_16gf/igie/README.md
index a9c573f88bb2d4125f17f7042821c0e0e65c6c75..be7585b6530a107ef9572c530a39d0e29aa51bac 100644
--- a/models/cv/classification/regnet_y_16gf/igie/README.md
+++ b/models/cv/classification/regnet_y_16gf/igie/README.md
@@ -9,7 +9,8 @@ RegNet_y_16gf is an efficient convolutional neural network model in the RegNet f
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
diff --git a/models/cv/classification/regnet_y_1_6gf/igie/README.md b/models/cv/classification/regnet_y_1_6gf/igie/README.md
index 2151fea729fa3bdc37f37fe55afbf3beeae0134d..8fc8fc99ffc274c68842100093731592ffd3e2ec 100644
--- a/models/cv/classification/regnet_y_1_6gf/igie/README.md
+++ b/models/cv/classification/regnet_y_1_6gf/igie/README.md
@@ -8,7 +8,8 @@ RegNet is a family of models designed for image classification tasks, as describ
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/repvgg/igie/README.md b/models/cv/classification/repvgg/igie/README.md
index 6ee9e4f4e2e530ad34a2873ec33224b8d574001e..9e4368ff62b05aefb57ee9996d7cec89695f987a 100644
--- a/models/cv/classification/repvgg/igie/README.md
+++ b/models/cv/classification/repvgg/igie/README.md
@@ -8,7 +8,8 @@ RepVGG is an innovative convolutional neural network architecture that combines
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/repvgg/ixrt/README.md b/models/cv/classification/repvgg/ixrt/README.md
index 8ec55d1d31883f35bf0d911d0a2fb16f90d5c377..897b3375f24cf5aca09075abfdb6b1a504d70512 100644
--- a/models/cv/classification/repvgg/ixrt/README.md
+++ b/models/cv/classification/repvgg/ixrt/README.md
@@ -9,7 +9,8 @@ It was developed by researchers at the University of Oxford and introduced in th
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/res2net50/igie/README.md b/models/cv/classification/res2net50/igie/README.md
index 71f6bd9550ce568cd477b3b756e04f6ab39143d8..fc857f031e94858890441a5039e9e6714e9d72ac 100644
--- a/models/cv/classification/res2net50/igie/README.md
+++ b/models/cv/classification/res2net50/igie/README.md
@@ -8,7 +8,8 @@ Res2Net50 is a convolutional neural network architecture that introduces the con
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/res2net50/ixrt/README.md b/models/cv/classification/res2net50/ixrt/README.md
index fc0c52be8095d6b7ffaa48db7cb88137474a4e23..b0ce62ca923383c077051c4a42d8624edd784c20 100644
--- a/models/cv/classification/res2net50/ixrt/README.md
+++ b/models/cv/classification/res2net50/ixrt/README.md
@@ -8,7 +8,8 @@ A novel building block for CNNs, namely Res2Net, by constructing hierarchical re
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnest50/igie/README.md b/models/cv/classification/resnest50/igie/README.md
index c2312862255b94ed4596452b715405daf4920745..fdf969b6b4122cf1f22331843cd927c33c61dab5 100644
--- a/models/cv/classification/resnest50/igie/README.md
+++ b/models/cv/classification/resnest50/igie/README.md
@@ -8,7 +8,8 @@ ResNeSt50 is a deep convolutional neural network model based on the ResNeSt arch
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnet101/igie/README.md b/models/cv/classification/resnet101/igie/README.md
index 762ade7f7a402530b8e2951358123d4cbdf3818f..94d285ea6d79e6aa1520988a3cb22c706431d00d 100644
--- a/models/cv/classification/resnet101/igie/README.md
+++ b/models/cv/classification/resnet101/igie/README.md
@@ -8,7 +8,8 @@ ResNet101 is a convolutional neural network architecture that belongs to the Res
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnet101/ixrt/README.md b/models/cv/classification/resnet101/ixrt/README.md
index e141848785725cd9c1374ee276b1a3308c47df61..8b91c5b52d5d54ba7182b3f4bd3db20f277a9140 100644
--- a/models/cv/classification/resnet101/ixrt/README.md
+++ b/models/cv/classification/resnet101/ixrt/README.md
@@ -8,7 +8,8 @@ ResNet-101 is a variant of the ResNet (Residual Network) architecture, and it be
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnet152/igie/README.md b/models/cv/classification/resnet152/igie/README.md
index 5f83f33a2beb48d58eee2977f90da65eee5faca3..0aed19d5db5594ba2790f6ade2eb8114a95269b6 100644
--- a/models/cv/classification/resnet152/igie/README.md
+++ b/models/cv/classification/resnet152/igie/README.md
@@ -8,7 +8,8 @@ ResNet152 is a convolutional neural network architecture that is part of the Res
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnet18/igie/README.md b/models/cv/classification/resnet18/igie/README.md
index 85adc79da0e7cb03b32718e357a01726ed0645b2..f3ef4f569c97f3e16b9fc87da6ba0f15fe04d956 100644
--- a/models/cv/classification/resnet18/igie/README.md
+++ b/models/cv/classification/resnet18/igie/README.md
@@ -8,7 +8,8 @@ ResNet-18 is a relatively compact deep neural network.The ResNet-18 architecture
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnet18/ixrt/README.md b/models/cv/classification/resnet18/ixrt/README.md
index 70d3786b2d0bce449ba6da5181dd6bd74805722f..bfb0d4b37ef2b6bd1409fe8528aebc098544f0ba 100644
--- a/models/cv/classification/resnet18/ixrt/README.md
+++ b/models/cv/classification/resnet18/ixrt/README.md
@@ -8,7 +8,8 @@ ResNet-18 is a variant of the ResNet (Residual Network) architecture, which was
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnet34/ixrt/README.md b/models/cv/classification/resnet34/ixrt/README.md
index 3b7956bb324c6ab5dce41233c5aa0edaf31c19f3..fc63548e173fd277709c8f2158fa73307005477f 100644
--- a/models/cv/classification/resnet34/ixrt/README.md
+++ b/models/cv/classification/resnet34/ixrt/README.md
@@ -8,7 +8,8 @@ Residual Networks, or ResNets, learn residual functions with reference to the la
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnet50/igie/README.md b/models/cv/classification/resnet50/igie/README.md
index 55fb8db2c8b68e889436d67a329756b439e74150..c6f43bf6880acc5e6b565a55ebeb14856f43b2f0 100644
--- a/models/cv/classification/resnet50/igie/README.md
+++ b/models/cv/classification/resnet50/igie/README.md
@@ -8,7 +8,8 @@ ResNet-50 is a convolutional neural network architecture that belongs to the Res
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnet50/ixrt/README.md b/models/cv/classification/resnet50/ixrt/README.md
index f9db5f1f38f64c22055afc866ead5a90b75015ca..c4edfe1d96d4fb3ddf1567953fb8482bf20014e8 100644
--- a/models/cv/classification/resnet50/ixrt/README.md
+++ b/models/cv/classification/resnet50/ixrt/README.md
@@ -8,7 +8,8 @@ Residual Networks, or ResNets, learn residual functions with reference to the la
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnetv1d50/igie/README.md b/models/cv/classification/resnetv1d50/igie/README.md
index 52d32a70ba88d728820016b317698e0e41b12b04..b65e89d3ce569ec8af9cda6618918fd171016c65 100644
--- a/models/cv/classification/resnetv1d50/igie/README.md
+++ b/models/cv/classification/resnetv1d50/igie/README.md
@@ -8,7 +8,8 @@ ResNetV1D50 is an enhanced version of ResNetV1-50 that incorporates changes like
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnetv1d50/ixrt/README.md b/models/cv/classification/resnetv1d50/ixrt/README.md
index 9a8d945de7190080c83437591649145961c7eecb..5f85b4e15860a4cd78afa91faacb5487083800dc 100644
--- a/models/cv/classification/resnetv1d50/ixrt/README.md
+++ b/models/cv/classification/resnetv1d50/ixrt/README.md
@@ -8,7 +8,8 @@ Residual Networks, or ResNets, learn residual functions with reference to the la
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnext101_32x8d/igie/README.md b/models/cv/classification/resnext101_32x8d/igie/README.md
index d2e6b25dc658dc89c144de8629150d946b259c2a..5215d528cccc770599fa6c5fccbcc89489e1e56f 100644
--- a/models/cv/classification/resnext101_32x8d/igie/README.md
+++ b/models/cv/classification/resnext101_32x8d/igie/README.md
@@ -8,7 +8,8 @@ ResNeXt101_32x8d is a deep convolutional neural network introduced in the paper
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnext101_32x8d/ixrt/README.md b/models/cv/classification/resnext101_32x8d/ixrt/README.md
index 5859d9157db83776129c5ecdd2a36814726ffbae..84a63be8902aeaed6169113a304ce916a3f14525 100644
--- a/models/cv/classification/resnext101_32x8d/ixrt/README.md
+++ b/models/cv/classification/resnext101_32x8d/ixrt/README.md
@@ -12,7 +12,8 @@ This design improves feature extraction while maintaining computational efficien
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnext101_64x4d/igie/README.md b/models/cv/classification/resnext101_64x4d/igie/README.md
index 22ff3449c5b8ed37c6319c883dae6a722d4d569b..468eff2dc1ea7855b9b479d66ebd13f8df9b9e94 100644
--- a/models/cv/classification/resnext101_64x4d/igie/README.md
+++ b/models/cv/classification/resnext101_64x4d/igie/README.md
@@ -8,7 +8,8 @@ The ResNeXt101_64x4d is a deep learning model based on the deep residual network
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnext101_64x4d/ixrt/README.md b/models/cv/classification/resnext101_64x4d/ixrt/README.md
index cc6474908af271a1dbc23e5b77a4d712121daa57..503d284764e443ff2563ad8df0e81435bdc79128 100644
--- a/models/cv/classification/resnext101_64x4d/ixrt/README.md
+++ b/models/cv/classification/resnext101_64x4d/ixrt/README.md
@@ -11,7 +11,8 @@ various input sizes
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnext50_32x4d/igie/README.md b/models/cv/classification/resnext50_32x4d/igie/README.md
index 5fb0b5544a4654c2cf455404785f9b929675a196..1cf64e6fd9c52dd4bfe97d27a5ef0b1e09fe3e6b 100644
--- a/models/cv/classification/resnext50_32x4d/igie/README.md
+++ b/models/cv/classification/resnext50_32x4d/igie/README.md
@@ -8,7 +8,8 @@ The ResNeXt50_32x4d model is a convolutional neural network architecture designe
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/resnext50_32x4d/ixrt/README.md b/models/cv/classification/resnext50_32x4d/ixrt/README.md
index 75e4bd20e9cff512e848a1faf8453148b50ea38e..da34629261ffd2b112a2fd3c6e58253d0f878f10 100644
--- a/models/cv/classification/resnext50_32x4d/ixrt/README.md
+++ b/models/cv/classification/resnext50_32x4d/ixrt/README.md
@@ -8,7 +8,8 @@ The ResNeXt50_32x4d model is a convolutional neural network architecture designe
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/seresnet50/igie/README.md b/models/cv/classification/seresnet50/igie/README.md
index 3642b2e58f87d5670d4bd69694fdf0ff1157d7dc..46b63f84da6c8107b81ed1e1547e2a907cc22f8e 100644
--- a/models/cv/classification/seresnet50/igie/README.md
+++ b/models/cv/classification/seresnet50/igie/README.md
@@ -8,7 +8,8 @@ SEResNet50 is an enhanced version of the ResNet50 network integrated with Squeez
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/shufflenet_v1/ixrt/README.md b/models/cv/classification/shufflenet_v1/ixrt/README.md
index fa0203730e3abf05809200e8878bc1f61592a2f8..9417776f24dac06eba2b30ff39fb3326afda1dbe 100644
--- a/models/cv/classification/shufflenet_v1/ixrt/README.md
+++ b/models/cv/classification/shufflenet_v1/ixrt/README.md
@@ -9,7 +9,8 @@ It uses techniques such as deep separable convolution and channel shuffle to red
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/shufflenetv2_x0_5/igie/README.md b/models/cv/classification/shufflenetv2_x0_5/igie/README.md
index d5e56128ab6a681701b3deb8a1edd805153b23d8..9844f7135176facdaf3e8c68fba8373481936d9b 100644
--- a/models/cv/classification/shufflenetv2_x0_5/igie/README.md
+++ b/models/cv/classification/shufflenetv2_x0_5/igie/README.md
@@ -10,7 +10,8 @@ convolutions, and efficient building blocks to further reduce computational comp
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
diff --git a/models/cv/classification/shufflenetv2_x0_5/ixrt/README.md
b/models/cv/classification/shufflenetv2_x0_5/ixrt/README.md index efd845955b05a6e5718aeb589af8f2100d563e69..dc1d42892ef945218aacc3774b20a810e36a482d 100644 --- a/models/cv/classification/shufflenetv2_x0_5/ixrt/README.md +++ b/models/cv/classification/shufflenetv2_x0_5/ixrt/README.md @@ -10,7 +10,8 @@ convolutions, and efficient building blocks to further reduce computational comp | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/shufflenetv2_x1_0/igie/README.md b/models/cv/classification/shufflenetv2_x1_0/igie/README.md index 3e43cf66014fa30549666fcf24a9fc1f73034771..0f0ad384d10ac87b37e39fbdcc55a855e512a3b2 100644 --- a/models/cv/classification/shufflenetv2_x1_0/igie/README.md +++ b/models/cv/classification/shufflenetv2_x1_0/igie/README.md @@ -8,7 +8,8 @@ ShuffleNet V2_x1_0 is an efficient convolutional neural network (CNN) architectu | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/shufflenetv2_x1_0/ixrt/README.md b/models/cv/classification/shufflenetv2_x1_0/ixrt/README.md index b2fd0085eb927d475470170a20a1482d78d98f03..5122bd333e0a0823d61bc2899b0505445dd3d600 100644 --- a/models/cv/classification/shufflenetv2_x1_0/ixrt/README.md +++ b/models/cv/classification/shufflenetv2_x1_0/ixrt/README.md @@ -8,7 +8,8 @@ ShuffleNet V2_x1_0 is an efficient convolutional neural network (CNN) architectu | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | 
:----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/shufflenetv2_x1_5/igie/README.md b/models/cv/classification/shufflenetv2_x1_5/igie/README.md index 6eb085a6cb6b3417c04e3c86dfdb761729c1c8fc..d71cd25fe59b32721e71872875422c595b6a1a67 100644 --- a/models/cv/classification/shufflenetv2_x1_5/igie/README.md +++ b/models/cv/classification/shufflenetv2_x1_5/igie/README.md @@ -8,7 +8,8 @@ ShuffleNetV2_x1_5 is a lightweight convolutional neural network specifically des | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/shufflenetv2_x1_5/ixrt/README.md b/models/cv/classification/shufflenetv2_x1_5/ixrt/README.md index 34bb7cbe77bfa1eeae9a59bebd3cfdac7e5de070..c10d0a2c861ef77dd7bb8888ef178e3b5ff2c58a 100644 --- a/models/cv/classification/shufflenetv2_x1_5/ixrt/README.md +++ b/models/cv/classification/shufflenetv2_x1_5/ixrt/README.md @@ -8,7 +8,8 @@ ShuffleNetV2_x1_5 is a lightweight convolutional neural network specifically des | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/shufflenetv2_x2_0/igie/README.md b/models/cv/classification/shufflenetv2_x2_0/igie/README.md index bfca029149aa6dd15858a582a4bb5334e6b20eaf..3839929b89b4898a21eb6e6fa8f58ab8ec290d87 100644 --- a/models/cv/classification/shufflenetv2_x2_0/igie/README.md +++ b/models/cv/classification/shufflenetv2_x2_0/igie/README.md @@ -8,7 +8,8 @@ ShuffleNetV2_x2_0 is a lightweight 
convolutional neural network introduced in th | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/shufflenetv2_x2_0/ixrt/README.md b/models/cv/classification/shufflenetv2_x2_0/ixrt/README.md index ca8b5212b55955a670517bf45e4181b7806ba316..529e5f867135a59649d6db47c42b59546b6a19a6 100644 --- a/models/cv/classification/shufflenetv2_x2_0/ixrt/README.md +++ b/models/cv/classification/shufflenetv2_x2_0/ixrt/README.md @@ -8,7 +8,8 @@ ShuffleNetV2_x2_0 is a lightweight convolutional neural network introduced in th | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/squeezenet_v1_0/igie/README.md b/models/cv/classification/squeezenet_v1_0/igie/README.md index c5a5b2441d7dc5eb787396e32d5ce8b79c0091b2..f5840a9525eab5beded5ef76918d1d44f8b947ee 100644 --- a/models/cv/classification/squeezenet_v1_0/igie/README.md +++ b/models/cv/classification/squeezenet_v1_0/igie/README.md @@ -8,7 +8,8 @@ SqueezeNet1_0 is a lightweight convolutional neural network introduced in the pa | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/squeezenet_v1_0/ixrt/README.md b/models/cv/classification/squeezenet_v1_0/ixrt/README.md index dc92042a41d34a342a19fa3a943b04924f7fc3ac..69d79fede5c3d0095bc658dc2598fc4ae0a0a137 
100644 --- a/models/cv/classification/squeezenet_v1_0/ixrt/README.md +++ b/models/cv/classification/squeezenet_v1_0/ixrt/README.md @@ -10,7 +10,8 @@ It was developed by researchers at DeepScale and released in 2016. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/squeezenet_v1_1/igie/README.md b/models/cv/classification/squeezenet_v1_1/igie/README.md index 93fec92faae527927194289e2882eb370954121a..159075720ea3f9db3330244d232acefc6b27c7e3 100644 --- a/models/cv/classification/squeezenet_v1_1/igie/README.md +++ b/models/cv/classification/squeezenet_v1_1/igie/README.md @@ -8,7 +8,8 @@ SqueezeNet 1.1 is an improved version of SqueezeNet, designed for efficient comp | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/squeezenet_v1_1/ixrt/README.md b/models/cv/classification/squeezenet_v1_1/ixrt/README.md index 6f3a4a10a5558fdce6e665d7950d2c52e0fddb60..39811e760f6d84f6dbcf6098e06ac6041bbbf62c 100644 --- a/models/cv/classification/squeezenet_v1_1/ixrt/README.md +++ b/models/cv/classification/squeezenet_v1_1/ixrt/README.md @@ -10,7 +10,8 @@ It was developed by researchers at DeepScale and released in 2016. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/svt_base/igie/README.md b/models/cv/classification/svt_base/igie/README.md index 3076aa46566b3af9d36941eaf2b8995266f2594a..a71bf6450ad09ae78b6a481e78aec552b085c275 100644 --- a/models/cv/classification/svt_base/igie/README.md +++ b/models/cv/classification/svt_base/igie/README.md @@ -8,7 +8,8 @@ SVT Base is a mid-sized variant of the Sparse Vision Transformer (SVT) series, d | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/swin_transformer/igie/README.md b/models/cv/classification/swin_transformer/igie/README.md index 06acaa7c5fbd2504d29c1e46ce05c0ed30f090b4..221562363860320bc8efa23acea330f4ec43df60 100644 --- a/models/cv/classification/swin_transformer/igie/README.md +++ b/models/cv/classification/swin_transformer/igie/README.md @@ -8,7 +8,8 @@ Swin Transformer is a pioneering neural network architecture that introduces a n | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/swin_transformer_large/ixrt/README.md b/models/cv/classification/swin_transformer_large/ixrt/README.md index 032c961d88878d66c4c830cac3b3273545d80295..fb19aa78f05fe7abad02ea29bb6db104b1b06801 100644 --- 
a/models/cv/classification/swin_transformer_large/ixrt/README.md +++ b/models/cv/classification/swin_transformer_large/ixrt/README.md @@ -8,7 +8,8 @@ Swin Transformer-Large is a variant of the Swin Transformer, an architecture des | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/twins_pcpvt/igie/README.md b/models/cv/classification/twins_pcpvt/igie/README.md index 2173542534a4ba4d6dd6b3b5bcfdf1f652a7b46b..078afa10b788248ea1bba44a19762a14695a3eba 100644 --- a/models/cv/classification/twins_pcpvt/igie/README.md +++ b/models/cv/classification/twins_pcpvt/igie/README.md @@ -8,7 +8,8 @@ Twins_PCPVT Small is a lightweight vision transformer model that combines pyrami | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/van_b0/igie/README.md b/models/cv/classification/van_b0/igie/README.md index 421add0edef0067c18f6f0112cabaccce9eb8039..ae92b6e8eaf940edb8d508dead93ac918cdfc1b4 100644 --- a/models/cv/classification/van_b0/igie/README.md +++ b/models/cv/classification/van_b0/igie/README.md @@ -8,7 +8,8 @@ VAN-B0 is a lightweight visual attention network that combines convolution and a | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/vgg11/igie/README.md 
b/models/cv/classification/vgg11/igie/README.md index 41a9ea9a41691e147422787e3716b1a3e5ae2011..f279716de176e1fb76312d1b4a0061b163f7de7f 100644 --- a/models/cv/classification/vgg11/igie/README.md +++ b/models/cv/classification/vgg11/igie/README.md @@ -8,7 +8,8 @@ VGG11 is a deep convolutional neural network introduced by the Visual Geometry G | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/vgg16/igie/README.md b/models/cv/classification/vgg16/igie/README.md index d0fcef2f39995fed8ab1574b43ea7192b34cbb62..f527f15b1091a468e2db0b602d8ee59751fb0a6e 100644 --- a/models/cv/classification/vgg16/igie/README.md +++ b/models/cv/classification/vgg16/igie/README.md @@ -8,7 +8,8 @@ VGG16 is a convolutional neural network (CNN) architecture designed for image cl | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/vgg16/ixrt/README.md b/models/cv/classification/vgg16/ixrt/README.md index 589d4a724fe9375de72e57f338d08ce70114c921..c763274c12f9d6ad436eca587cfd585db472a40d 100644 --- a/models/cv/classification/vgg16/ixrt/README.md +++ b/models/cv/classification/vgg16/ixrt/README.md @@ -9,7 +9,8 @@ It finished second in the 2014 ImageNet Massive Visual Identity Challenge (ILSVR | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff 
--git a/models/cv/classification/vgg19/igie/README.md b/models/cv/classification/vgg19/igie/README.md index b35cbb4ae8f0f9e6a89f2adb2003bfc20b697c9c..7c51ff2a2b95b342711ab8646ba228cf7d78b75e 100644 --- a/models/cv/classification/vgg19/igie/README.md +++ b/models/cv/classification/vgg19/igie/README.md @@ -8,7 +8,8 @@ VGG19 is a member of the VGG network family, proposed by the Visual Geometry Gro | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/vgg19_bn/igie/README.md b/models/cv/classification/vgg19_bn/igie/README.md index 420ca505ff5c2749ff7020aeb95f905ed19dba2b..6a1bbec4cbef61e5422cc5651e2537ab26872cc7 100644 --- a/models/cv/classification/vgg19_bn/igie/README.md +++ b/models/cv/classification/vgg19_bn/igie/README.md @@ -8,7 +8,8 @@ VGG19_BN is a variant of the VGG network, based on VGG19 with the addition of Ba | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/vit/igie/README.md b/models/cv/classification/vit/igie/README.md index 45b3ddd0dcb4930a9152907c8f9b82c2867df9ca..ac909a266f8a91c5b5bfc0d5965283eaf15017f6 100644 --- a/models/cv/classification/vit/igie/README.md +++ b/models/cv/classification/vit/igie/README.md @@ -8,7 +8,8 @@ ViT is a novel vision model architecture proposed by Google in the paper *An Ima | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | 
+| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/wide_resnet101/igie/README.md b/models/cv/classification/wide_resnet101/igie/README.md index 7abc2c6452965a76aa2ae379a55be12ce96b7372..24410f6d1bf388d4977a8e05699107b50cc9f3ea 100644 --- a/models/cv/classification/wide_resnet101/igie/README.md +++ b/models/cv/classification/wide_resnet101/igie/README.md @@ -8,7 +8,8 @@ Wide ResNet101 is a variant of the ResNet architecture that focuses on increasin | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/wide_resnet50/igie/README.md b/models/cv/classification/wide_resnet50/igie/README.md index 695cc866aa006ba07f8a35eb8a03edd4359dbae3..b175fb20a3fbef9cba16522039a2fcc486e2c362 100644 --- a/models/cv/classification/wide_resnet50/igie/README.md +++ b/models/cv/classification/wide_resnet50/igie/README.md @@ -8,7 +8,8 @@ The distinguishing feature of Wide ResNet50 lies in its widened architecture com | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/wide_resnet50/ixrt/README.md b/models/cv/classification/wide_resnet50/ixrt/README.md index 52220b4ce55bb443087daa9184560701f7938ebf..7608697fe78024157026ac73fbfe6587d81df010 100644 --- a/models/cv/classification/wide_resnet50/ixrt/README.md +++ b/models/cv/classification/wide_resnet50/ixrt/README.md @@ -8,7 +8,8 @@ The distinguishing feature of Wide ResNet50 lies in its widened architecture com | GPU | [IXUCA 
SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/face_recognition/facenet/ixrt/README.md b/models/cv/face_recognition/facenet/ixrt/README.md index cd58c9b9e3ceeb57d077729b7afc781093fb442c..c44213ad790130b95519466ab2272f219b4c5768 100644 --- a/models/cv/face_recognition/facenet/ixrt/README.md +++ b/models/cv/face_recognition/facenet/ixrt/README.md @@ -8,7 +8,8 @@ Facenet is a facial recognition system originally proposed and developed by Goog | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/instance_segmentation/solov1/ixrt/README.md b/models/cv/instance_segmentation/solov1/ixrt/README.md index 62ab78a094b1795dcf6831d18a62ef02492513bb..e6af2d2389df6cefba66c48bde4c95520074f762 100644 --- a/models/cv/instance_segmentation/solov1/ixrt/README.md +++ b/models/cv/instance_segmentation/solov1/ixrt/README.md @@ -8,7 +8,8 @@ SOLO (Segmenting Objects by Locations) is a new instance segmentation method tha | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/multi_object_tracking/deepsort/igie/README.md b/models/cv/multi_object_tracking/deepsort/igie/README.md index 9988aa78997cae7e70e37793f52fb71ceb8c110d..3eb067d8e377403d2f7932ebb5d1480d7e61d094 100644 --- a/models/cv/multi_object_tracking/deepsort/igie/README.md +++ 
b/models/cv/multi_object_tracking/deepsort/igie/README.md @@ -8,7 +8,8 @@ DeepSort integrates deep neural networks with traditional tracking methods to ac | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/multi_object_tracking/fastreid/igie/README.md b/models/cv/multi_object_tracking/fastreid/igie/README.md index c08f78531d8e3b3e841aa76620ff840518e723a8..6e9a7f7fc595f3eddbc22ffbddc8d414cc3e9d3c 100644 --- a/models/cv/multi_object_tracking/fastreid/igie/README.md +++ b/models/cv/multi_object_tracking/fastreid/igie/README.md @@ -8,7 +8,8 @@ FastReID is a research platform that implements state-of-the-art re-identificati | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/multi_object_tracking/repnet/igie/README.md b/models/cv/multi_object_tracking/repnet/igie/README.md index 51625f15036522899d454833fcedb39868887c10..7b3c45e40e9dd9569dea8246e4febc0eaac42d0b 100644 --- a/models/cv/multi_object_tracking/repnet/igie/README.md +++ b/models/cv/multi_object_tracking/repnet/igie/README.md @@ -8,7 +8,8 @@ The paper "Deep Relative Distance Learning: Tell the Difference Between Similar | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/atss/igie/README.md b/models/cv/object_detection/atss/igie/README.md index 
5ba1d3cce148577c2adc58196be7e83959e060ad..ddce93995d46ee0cd2c5f6df8270551f36e22817 100644 --- a/models/cv/object_detection/atss/igie/README.md +++ b/models/cv/object_detection/atss/igie/README.md @@ -8,7 +8,8 @@ ATSS is an advanced adaptive training sample selection method that effectively e | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/centernet/igie/README.md b/models/cv/object_detection/centernet/igie/README.md index 54316c9f498919cdceb72faf491e3ecf61e1e383..25115bea4e679525673ac1936b641ddc0fdc2810 100644 --- a/models/cv/object_detection/centernet/igie/README.md +++ b/models/cv/object_detection/centernet/igie/README.md @@ -8,7 +8,8 @@ CenterNet is an efficient object detection model that simplifies the traditional | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/centernet/ixrt/README.md b/models/cv/object_detection/centernet/ixrt/README.md index a3f4d387ab7cdcef23e8f49e6552d24d885f1037..3af16f69519afce80921ca98b28ffe51df096fa7 100644 --- a/models/cv/object_detection/centernet/ixrt/README.md +++ b/models/cv/object_detection/centernet/ixrt/README.md @@ -8,7 +8,8 @@ CenterNet is an efficient object detection model that simplifies the traditional | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git 
a/models/cv/object_detection/detr/ixrt/README.md b/models/cv/object_detection/detr/ixrt/README.md index 25285303931e5c8f143624e81289451c00602723..f63ee9e526f435957456fd15ef18df1e5b75f6e0 100755 --- a/models/cv/object_detection/detr/ixrt/README.md +++ b/models/cv/object_detection/detr/ixrt/README.md @@ -8,7 +8,8 @@ DETR (DEtection TRansformer) is a novel approach that views object detection as | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/fcos/igie/README.md b/models/cv/object_detection/fcos/igie/README.md index 9a57584e37961deab953f8a25f45036e0115558f..03022c1f72e5c69ff480f2a8910e4a3605efe3bf 100644 --- a/models/cv/object_detection/fcos/igie/README.md +++ b/models/cv/object_detection/fcos/igie/README.md @@ -8,7 +8,8 @@ FCOS is an innovative one-stage object detection framework that abandons traditi | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/fcos/ixrt/README.md b/models/cv/object_detection/fcos/ixrt/README.md index 721fed1552fa6acee11ec5cc089a6aefbb7614cd..da6694965fa4fa4dc185e979a9eac75cc7acec80 100755 --- a/models/cv/object_detection/fcos/ixrt/README.md +++ b/models/cv/object_detection/fcos/ixrt/README.md @@ -9,7 +9,8 @@ For more details, please refer to our [report on Arxiv](https://arxiv.org/abs/19 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | 
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/foveabox/igie/README.md b/models/cv/object_detection/foveabox/igie/README.md
index 48b40e882a19d4a3e3dff5b218283798a02633cd..de20d6a4a40ca3d87915702aa4a7d3a63b3ec314 100644
--- a/models/cv/object_detection/foveabox/igie/README.md
+++ b/models/cv/object_detection/foveabox/igie/README.md
@@ -8,7 +8,8 @@ FoveaBox is an advanced anchor-free object detection framework that enhances acc
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/foveabox/ixrt/README.md b/models/cv/object_detection/foveabox/ixrt/README.md
index b9dd175c8cf34d08550525951a93c278e123214e..b9cfed86efe2f04dbacb6a3de911e4edf095cd50 100644
--- a/models/cv/object_detection/foveabox/ixrt/README.md
+++ b/models/cv/object_detection/foveabox/ixrt/README.md
@@ -8,7 +8,8 @@ FoveaBox is an advanced anchor-free object detection framework that enhances acc
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/fsaf/igie/README.md b/models/cv/object_detection/fsaf/igie/README.md
index a1fb8a1db01b53e55ba821de065da8f8b72a6a5b..bb6c0e592b241de6bbe3c936c10ede85a5070cb7 100644
--- a/models/cv/object_detection/fsaf/igie/README.md
+++ b/models/cv/object_detection/fsaf/igie/README.md
@@ -8,7 +8,8 @@ The FSAF (Feature Selective Anchor-Free) module is an innovative component for s
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/fsaf/ixrt/README.md b/models/cv/object_detection/fsaf/ixrt/README.md
index 7a6d304c0aef8d72edc70ef6b3a4bd512418f7e2..e4250d3ef89c8d77e1831b6927c5038ec66a117c 100644
--- a/models/cv/object_detection/fsaf/ixrt/README.md
+++ b/models/cv/object_detection/fsaf/ixrt/README.md
@@ -8,7 +8,8 @@ The FSAF (Feature Selective Anchor-Free) module is an innovative component for s
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/gfl/igie/README.md b/models/cv/object_detection/gfl/igie/README.md
index d4174cd2dec5a5e768ad0680ed0c7f0a0a0eacbb..4f64c7b937fc33089d26ab1458f03b280dad7862 100644
--- a/models/cv/object_detection/gfl/igie/README.md
+++ b/models/cv/object_detection/gfl/igie/README.md
@@ -8,7 +8,8 @@ GFL (Generalized Focal Loss) is an object detection model that utilizes an impro
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/cv/object_detection/hrnet/igie/README.md b/models/cv/object_detection/hrnet/igie/README.md
index 3e584a92141fa040499501f13557569202daf52d..8c1a8be33e326eaeaa0d98771efe12238d89278c 100644
--- a/models/cv/object_detection/hrnet/igie/README.md
+++ b/models/cv/object_detection/hrnet/igie/README.md
@@ -8,7 +8,8 @@ HRNet is an advanced deep learning architecture for human pose estimation, chara
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/hrnet/ixrt/README.md b/models/cv/object_detection/hrnet/ixrt/README.md
index f0b27f07b0917f5f3c41a2a72f8c60e7d21bbeae..41626061d4129738c09f33d48635922a4ca691a0 100644
--- a/models/cv/object_detection/hrnet/ixrt/README.md
+++ b/models/cv/object_detection/hrnet/ixrt/README.md
@@ -8,7 +8,8 @@ HRNet is an advanced deep learning architecture for human pose estimation, chara
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/paa/igie/README.md b/models/cv/object_detection/paa/igie/README.md
index 9f19fc8d6ea44e5eee90bba3a36edc6f9956b913..df48de68076fee67d303369f778cc5b9277efeba 100644
--- a/models/cv/object_detection/paa/igie/README.md
+++ b/models/cv/object_detection/paa/igie/README.md
@@ -8,7 +8,8 @@ PAA (Probabilistic Anchor Assignment) is an algorithm for object detection that
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/retinaface/igie/README.md b/models/cv/object_detection/retinaface/igie/README.md
index 1c4d302872a2bef01acaab0bd438213cd7e7ac66..7f354187accd4e59d49567b8403a46e21c17798f 100755
--- a/models/cv/object_detection/retinaface/igie/README.md
+++ b/models/cv/object_detection/retinaface/igie/README.md
@@ -8,7 +8,8 @@ RetinaFace is an efficient single-stage face detection model that employs a mult
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/retinaface/ixrt/README.md b/models/cv/object_detection/retinaface/ixrt/README.md
index 2323b20fe2d009e7c9ad217f858084e196a524ec..424f91839006794c28c5542b692262c1699547f7 100644
--- a/models/cv/object_detection/retinaface/ixrt/README.md
+++ b/models/cv/object_detection/retinaface/ixrt/README.md
@@ -8,7 +8,8 @@ RetinaFace is an efficient single-stage face detection model that employs a mult
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/retinanet/igie/README.md b/models/cv/object_detection/retinanet/igie/README.md
index 08477b2385bf14542f1894a551fb4ac1fadb0d63..10c8173ef013bf4476c64edc37a703d9c5d92e86 100644
--- a/models/cv/object_detection/retinanet/igie/README.md
+++ b/models/cv/object_detection/retinanet/igie/README.md
@@ -8,7 +8,8 @@ RetinaNet, an innovative object detector, challenges the conventional trade-off
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/rtmdet/igie/README.md b/models/cv/object_detection/rtmdet/igie/README.md
index 825b6280e2544c5a04d2a476f34e5572836a2e90..bd6db5e3c8e3a6a3dd8ab6566df0b3530c7ba45f 100644
--- a/models/cv/object_detection/rtmdet/igie/README.md
+++ b/models/cv/object_detection/rtmdet/igie/README.md
@@ -8,7 +8,8 @@ RTMDet, presented by the Shanghai AI Laboratory, is a novel framework for real-t
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/sabl/igie/README.md b/models/cv/object_detection/sabl/igie/README.md
index abde9655f1b4bb6690ba5ea8690a1e26d043ea2a..1fc9126b980cb780251b6deb390ed7493550c4a5 100644
--- a/models/cv/object_detection/sabl/igie/README.md
+++ b/models/cv/object_detection/sabl/igie/README.md
@@ -8,7 +8,8 @@ SABL (Side-Aware Boundary Localization) is an innovative approach in object dete
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov10/igie/README.md b/models/cv/object_detection/yolov10/igie/README.md
index 49193820af8c7df966143e29ee0bb50216e09171..928ba8b74182601f49f2366288cb774bb7e8db5c 100644
--- a/models/cv/object_detection/yolov10/igie/README.md
+++ b/models/cv/object_detection/yolov10/igie/README.md
@@ -8,7 +8,8 @@ YOLOv10, built on the Ultralytics Python package by researchers at Tsinghua Univ
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov10/ixrt/README.md b/models/cv/object_detection/yolov10/ixrt/README.md
index 6fade83d11496a0b3206ca89347ffb368b17ae83..c3c5da49020ebc746d32045dd4e3b5936ea1098d 100644
--- a/models/cv/object_detection/yolov10/ixrt/README.md
+++ b/models/cv/object_detection/yolov10/ixrt/README.md
@@ -8,7 +8,8 @@ YOLOv10, built on the Ultralytics Python package by researchers at Tsinghua Univ
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov11/igie/README.md b/models/cv/object_detection/yolov11/igie/README.md
index 9bf481163efe3011f9c37167f8ba8eaf61cacc85..f3475d3905ad302d13af3a6611b4a43d78b7624a 100644
--- a/models/cv/object_detection/yolov11/igie/README.md
+++ b/models/cv/object_detection/yolov11/igie/README.md
@@ -8,7 +8,8 @@ YOLOv11 is the latest generation of the YOLO (You Only Look Once) series object
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov11/ixrt/README.md b/models/cv/object_detection/yolov11/ixrt/README.md
index 3172be8544a51c291fd4bad291761ca88908a827..3f81a01f9e97d3b41081b994bec2ffa7c1dc95a3 100644
--- a/models/cv/object_detection/yolov11/ixrt/README.md
+++ b/models/cv/object_detection/yolov11/ixrt/README.md
@@ -8,7 +8,8 @@ YOLOv11 is the latest generation of the YOLO (You Only Look Once) series object
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov12/igie/README.md b/models/cv/object_detection/yolov12/igie/README.md
index 6299af3a2d601dd87614aa02c29d3e89ec4e804a..d27ff639ea9a8194bde0b5e842c936614e708366 100644
--- a/models/cv/object_detection/yolov12/igie/README.md
+++ b/models/cv/object_detection/yolov12/igie/README.md
@@ -8,7 +8,8 @@ YOLOv12 achieves high precision and efficient real-time object detection by inte
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov3/igie/README.md b/models/cv/object_detection/yolov3/igie/README.md
index d469210d900707984995f43607e854f98c7e3f84..5bce55dfde6691dedae8ac134abe2409f821884c 100644
--- a/models/cv/object_detection/yolov3/igie/README.md
+++ b/models/cv/object_detection/yolov3/igie/README.md
@@ -8,7 +8,8 @@ YOLOv3 is a influential object detection algorithm.The key innovation of YOLOv3
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov3/ixrt/README.md b/models/cv/object_detection/yolov3/ixrt/README.md
index b79768907e777fa54354a5f732faf426e43bcbcc..6963361c516c2893f1acf6947eebe4b7a6938d99 100644
--- a/models/cv/object_detection/yolov3/ixrt/README.md
+++ b/models/cv/object_detection/yolov3/ixrt/README.md
@@ -8,7 +8,8 @@ YOLOv3 is a influential object detection algorithm.The key innovation of YOLOv3
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov4/igie/README.md b/models/cv/object_detection/yolov4/igie/README.md
index c0753f517fdcb5bc44f313d9708ac3bab8282350..4ef0025476489f8e1d109cd02b52d22827fb9547 100644
--- a/models/cv/object_detection/yolov4/igie/README.md
+++ b/models/cv/object_detection/yolov4/igie/README.md
@@ -8,7 +8,8 @@ YOLOv4 employs a two-step process, involving regression for bounding box positio
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov4/ixrt/README.md b/models/cv/object_detection/yolov4/ixrt/README.md
index f6bd831e431a4c70064a9d348d143e397bd8889e..14810cf997da5a2b2f2973d160536a4218a482c2 100644
--- a/models/cv/object_detection/yolov4/ixrt/README.md
+++ b/models/cv/object_detection/yolov4/ixrt/README.md
@@ -8,7 +8,8 @@ YOLOv4 employs a two-step process, involving regression for bounding box positio
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov5/igie/README.md b/models/cv/object_detection/yolov5/igie/README.md
index 55f8b16242aa2a89d6e99f80c003e59a19f85669..6a8f30d9e1a699d0a156239f3aacceeaa734494e 100644
--- a/models/cv/object_detection/yolov5/igie/README.md
+++ b/models/cv/object_detection/yolov5/igie/README.md
@@ -8,7 +8,8 @@ The YOLOv5 architecture is designed for efficient and accurate object detection
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov5/ixrt/README.md b/models/cv/object_detection/yolov5/ixrt/README.md
index 69f568404f9abe717223373f3a76746cbc48d2a7..de030adb22a8b553f96db0eafc07632fd82ebc7e 100644
--- a/models/cv/object_detection/yolov5/ixrt/README.md
+++ b/models/cv/object_detection/yolov5/ixrt/README.md
@@ -8,7 +8,8 @@ The YOLOv5 architecture is designed for efficient and accurate object detection
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov5s/ixrt/README.md b/models/cv/object_detection/yolov5s/ixrt/README.md
index 88f55f22a1f9ed3f815bd23e6f07cbb80c37e192..1e216cfbced32a230cd7901637fd1167c0018652 100755
--- a/models/cv/object_detection/yolov5s/ixrt/README.md
+++ b/models/cv/object_detection/yolov5s/ixrt/README.md
@@ -8,7 +8,8 @@ The YOLOv5 architecture is designed for efficient and accurate object detection
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov6/igie/README.md b/models/cv/object_detection/yolov6/igie/README.md
index 4a31e67a82ee56586411bf40cf104af9e1ad568a..bbb1ba209aac9e324984fa024ecbf236272bb4ec 100644
--- a/models/cv/object_detection/yolov6/igie/README.md
+++ b/models/cv/object_detection/yolov6/igie/README.md
@@ -8,7 +8,8 @@ YOLOv6 integrates cutting-edge object detection advancements from industry and a
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov6/ixrt/README.md b/models/cv/object_detection/yolov6/ixrt/README.md
index 713c1f60be839649c3bc78b5deb5f816b8f53f9e..947cbc6998ad1456385fedd15c994fed9cd55046 100644
--- a/models/cv/object_detection/yolov6/ixrt/README.md
+++ b/models/cv/object_detection/yolov6/ixrt/README.md
@@ -8,7 +8,8 @@ YOLOv6 integrates cutting-edge object detection advancements from industry and a
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov7/igie/README.md b/models/cv/object_detection/yolov7/igie/README.md
index 1a12c66165326698ae7df27cb987b4aa83ac9c4f..e5979c9c2f38284830fb4ab8ee9a9b9964e2357b 100644
--- a/models/cv/object_detection/yolov7/igie/README.md
+++ b/models/cv/object_detection/yolov7/igie/README.md
@@ -8,7 +8,8 @@ YOLOv7 is a state-of-the-art real-time object detector that surpasses all known
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov7/ixrt/README.md b/models/cv/object_detection/yolov7/ixrt/README.md
index 8ff917cc0f74b5a72c4f60e9818a932fd24f5f87..641440a5eec66bf5e8070f19b99b12fc7bc95af0 100644
--- a/models/cv/object_detection/yolov7/ixrt/README.md
+++ b/models/cv/object_detection/yolov7/ixrt/README.md
@@ -8,7 +8,8 @@ YOLOv7 is an object detection model based on the YOLO (You Only Look Once) serie
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov8/igie/README.md b/models/cv/object_detection/yolov8/igie/README.md
index 714cda3cbd6e3a396b7ecc7a586116071987ecd0..7b069b0bc1b2023a28e53b3511ae7641927dd17c 100644
--- a/models/cv/object_detection/yolov8/igie/README.md
+++ b/models/cv/object_detection/yolov8/igie/README.md
@@ -8,7 +8,8 @@ Yolov8 combines speed and accuracy in real-time object detection tasks. With a f
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov8/ixrt/README.md b/models/cv/object_detection/yolov8/ixrt/README.md
index a6c0e003ca10e2184afe6cb26fe45b7b49ec62f3..f7ea4ddaaf544a4381e7a824686cf96458080945 100644
--- a/models/cv/object_detection/yolov8/ixrt/README.md
+++ b/models/cv/object_detection/yolov8/ixrt/README.md
@@ -8,7 +8,8 @@ Yolov8 combines speed and accuracy in real-time object detection tasks. With a f
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov9/igie/README.md b/models/cv/object_detection/yolov9/igie/README.md
index e52c362995c6cfb41b34af83427adf1f579be1d9..4bec9a289a3c3943ad984a99f61f63d455478bd0 100644
--- a/models/cv/object_detection/yolov9/igie/README.md
+++ b/models/cv/object_detection/yolov9/igie/README.md
@@ -8,7 +8,8 @@ YOLOv9 represents a major leap in real-time object detection by introducing inno
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolov9/ixrt/README.md b/models/cv/object_detection/yolov9/ixrt/README.md
index 806be63ac0fbb630f0a891920453fdaa0b8a7157..b9e1b1742110b6a4b42274cc823bf87075e24ce6 100644
--- a/models/cv/object_detection/yolov9/ixrt/README.md
+++ b/models/cv/object_detection/yolov9/ixrt/README.md
@@ -8,7 +8,8 @@ YOLOv9 represents a major leap in real-time object detection by introducing inno
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolox/igie/README.md b/models/cv/object_detection/yolox/igie/README.md
index 1b0cf15f57dfa5791595ab0470a96e8340af4a37..df42731221e7bdd3510b1188c63b193e000f077f 100644
--- a/models/cv/object_detection/yolox/igie/README.md
+++ b/models/cv/object_detection/yolox/igie/README.md
@@ -8,7 +8,8 @@ YOLOX is an anchor-free version of YOLO, with a simpler design but better perfor
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/object_detection/yolox/ixrt/README.md b/models/cv/object_detection/yolox/ixrt/README.md
index e372ac05362681b308d19b4f82703b113976ee63..2d4fb0c872d02b09796a8cb825db3f8f44a4cfeb 100644
--- a/models/cv/object_detection/yolox/ixrt/README.md
+++ b/models/cv/object_detection/yolox/ixrt/README.md
@@ -9,7 +9,8 @@ For more details, please refer to our [report on Arxiv](https://arxiv.org/abs/21
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/ocr/kie_layoutxlm/igie/README.md b/models/cv/ocr/kie_layoutxlm/igie/README.md
index 5ad55dd9d57b26a9f97a2f3c7c4965a5768fb8c2..bc64cfc1b78ae20b6fe4b9a9c36bf6ef8a143bb4 100644
--- a/models/cv/ocr/kie_layoutxlm/igie/README.md
+++ b/models/cv/ocr/kie_layoutxlm/igie/README.md
@@ -8,7 +8,8 @@ LayoutXLM is a groundbreaking multimodal pre-trained model for multilingual docu
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/ocr/svtr/igie/README.md b/models/cv/ocr/svtr/igie/README.md
index f5e7ad5488ba3181d761c17baa393e5e7834349b..f9aedccdd9d4ce4b504a2e8e66c03f90b58ec5e4 100644
--- a/models/cv/ocr/svtr/igie/README.md
+++ b/models/cv/ocr/svtr/igie/README.md
@@ -8,7 +8,8 @@ SVTR proposes a single vision model for scene text recognition. This model compl
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/pose_estimation/hrnetpose/igie/README.md b/models/cv/pose_estimation/hrnetpose/igie/README.md
index 7af12fbab8fae99b814124f9efaa7816d64685d0..bf366b347efdcb4af66c5a71d272d38089c18b3a 100644
--- a/models/cv/pose_estimation/hrnetpose/igie/README.md
+++ b/models/cv/pose_estimation/hrnetpose/igie/README.md
@@ -8,7 +8,8 @@ HRNetPose (High-Resolution Network for Pose Estimation) is a high-performance hu
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/pose_estimation/lightweight_openpose/ixrt/README.md b/models/cv/pose_estimation/lightweight_openpose/ixrt/README.md
index c784c2080af717c3f80a2d564164c3268c7279a2..54b8579a3a9906120ee5ec388d425f0e3b5aee0c 100644
--- a/models/cv/pose_estimation/lightweight_openpose/ixrt/README.md
+++ b/models/cv/pose_estimation/lightweight_openpose/ixrt/README.md
@@ -12,7 +12,8 @@ inference (no flip or any post-processing done).
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/pose_estimation/rtmpose/igie/README.md b/models/cv/pose_estimation/rtmpose/igie/README.md
index a615bd46ddea574a33061ef1c9ab5e5ef5a9cdce..6b6a7eecea1da6ab85fab67cf4416c20b4cd673f 100644
--- a/models/cv/pose_estimation/rtmpose/igie/README.md
+++ b/models/cv/pose_estimation/rtmpose/igie/README.md
@@ -8,7 +8,8 @@ RTMPose, a state-of-the-art framework developed by Shanghai AI Laboratory, excel
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/pose_estimation/rtmpose/ixrt/README.md b/models/cv/pose_estimation/rtmpose/ixrt/README.md
index cdd76938dbee7f98b938761cf147b974723deddc..c757624173112211edf64d6a933beed181eaaf4c 100644
--- a/models/cv/pose_estimation/rtmpose/ixrt/README.md
+++ b/models/cv/pose_estimation/rtmpose/ixrt/README.md
@@ -8,7 +8,8 @@ RTMPose, a state-of-the-art framework developed by Shanghai AI Laboratory, excel
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/cv/semantic_segmentation/unet/igie/README.md b/models/cv/semantic_segmentation/unet/igie/README.md
index 62220355140309bc66a2e9b416bfc3b9cb2b7565..a168b53fbe651d79203017af9f69e6e971ab6980 100644
--- a/models/cv/semantic_segmentation/unet/igie/README.md
+++ b/models/cv/semantic_segmentation/unet/igie/README.md
@@ -8,7 +8,8 @@ UNet is a convolutional neural network architecture for image segmentation, feat
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/multimodal/diffusion_model/stable-diffusion/diffusers/README.md b/models/multimodal/diffusion_model/stable-diffusion/diffusers/README.md
index ad3e62c99ae07325efcab72a619760907e67b52e..e1bce0d1f1d57afa68c569921b64db3a59a5c230 100644
--- a/models/multimodal/diffusion_model/stable-diffusion/diffusers/README.md
+++ b/models/multimodal/diffusion_model/stable-diffusion/diffusers/README.md
@@ -8,7 +8,8 @@ Stable Diffusion is a latent text-to-image diffusion model capable of generating
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/aria/vllm/README.md b/models/multimodal/vision_language_model/aria/vllm/README.md
index 94cdb57d5925e3d5737f06228d183e6dc0893ff6..8d4163ac0b756cb558ec9c8004b8e2ba13161dce 100644
--- a/models/multimodal/vision_language_model/aria/vllm/README.md
+++ b/models/multimodal/vision_language_model/aria/vllm/README.md
@@ -12,6 +12,7 @@ Aria is a multimodal native MoE model. It features:
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | vLLM | Release |
 | :----: | :----: | :----: | :----: |
+| MR-V100 | 4.3.0 | >=0.6.4 | 25.09 |
 | MR-V100 | 4.2.0 | >=0.6.6 | 25.06 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/chameleon_7b/vllm/README.md b/models/multimodal/vision_language_model/chameleon_7b/vllm/README.md
index 7a488b0a320202b0914a55f724627eb98eb482b6..fa903873c32d2ac5f370d94148dd7c22e2a1a5f2 100755
--- a/models/multimodal/vision_language_model/chameleon_7b/vllm/README.md
+++ b/models/multimodal/vision_language_model/chameleon_7b/vllm/README.md
@@ -8,7 +8,8 @@ Chameleon, an AI system that mitigates these limitations by augmenting LLMs with
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/clip/ixformer/README.md b/models/multimodal/vision_language_model/clip/ixformer/README.md
index 5d20f50ce99fc5fa83ef9a0c201f39464a5fb8fa..f870b0d60ca194b1a62698a414f8de43a2ff8661 100644
--- a/models/multimodal/vision_language_model/clip/ixformer/README.md
+++ b/models/multimodal/vision_language_model/clip/ixformer/README.md
@@ -8,7 +8,8 @@ CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/fuyu_8b/vllm/README.md b/models/multimodal/vision_language_model/fuyu_8b/vllm/README.md
index d13e0b364e215b3c4479edd6f0ee8072977f1e36..559b1399604f000a3842ce69b7a226baf6c1b1f5 100755
--- a/models/multimodal/vision_language_model/fuyu_8b/vllm/README.md
+++ b/models/multimodal/vision_language_model/fuyu_8b/vllm/README.md
@@ -12,7 +12,8 @@ transformer decoder like an image transformer (albeit with no pooling and causal
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/h2vol/vllm/README.md b/models/multimodal/vision_language_model/h2vol/vllm/README.md
index 0013e2e7ace6aeceadd06e83196d6bdcf462275e..4d72af6c2e0578f5ad93c3fc930d054670d006dc 100644
--- a/models/multimodal/vision_language_model/h2vol/vllm/README.md
+++ b/models/multimodal/vision_language_model/h2vol/vllm/README.md
@@ -12,6 +12,7 @@ language tasks.
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | vLLM | Release |
 | :----: | :----: | :----: | :----: |
+| MR-V100 | 4.3.0 | >=0.6.4 | 25.09 |
 | MR-V100 | 4.2.0 | >=0.6.4 | 25.06 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/idefics3/vllm/README.md b/models/multimodal/vision_language_model/idefics3/vllm/README.md
index 78d4117c170f1db13021db147a13bd7d87db0d5e..75b34aa1bb873553ac52c543308e2fb559ef9f83 100644
--- a/models/multimodal/vision_language_model/idefics3/vllm/README.md
+++ b/models/multimodal/vision_language_model/idefics3/vllm/README.md
@@ -11,6 +11,7 @@ significantly enhancing capabilities around OCR, document understanding and visu
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | vLLM | Release |
 | :----: | :----: | :----: | :----: |
+| MR-V100 | 4.3.0 | >=0.6.4 | 25.09 |
 | MR-V100 | 4.2.0 | >=0.6.4 | 25.06 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/intern_vl/vllm/README.md b/models/multimodal/vision_language_model/intern_vl/vllm/README.md
index c337a34094d9a2c4666cb2d3126aa3f64dcccc2d..dc9d06b2e5c9ebd3395672bc5e4ddcc5aff8de13 100644
--- a/models/multimodal/vision_language_model/intern_vl/vllm/README.md
+++ b/models/multimodal/vision_language_model/intern_vl/vllm/README.md
@@ -11,7 +11,8 @@ learning.
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/llama-3.2/vllm/README.md b/models/multimodal/vision_language_model/llama-3.2/vllm/README.md
index b6aab0789255ee31da3817ea962dacbf0b797fa7..7fddcc72713d14817260703c558116490a25199d 100644
--- a/models/multimodal/vision_language_model/llama-3.2/vllm/README.md
+++ b/models/multimodal/vision_language_model/llama-3.2/vllm/README.md
@@ -11,7 +11,8 @@ outperform many of the available open source and closed chat models on common in
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/llava/vllm/README.md b/models/multimodal/vision_language_model/llava/vllm/README.md
index 78a2119013b612c6e26f517339cf634fa1677b54..7027191f83293e204471893a58e36a7bff291131 100644
--- a/models/multimodal/vision_language_model/llava/vllm/README.md
+++ b/models/multimodal/vision_language_model/llava/vllm/README.md
@@ -13,7 +13,8 @@ reasoning.
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/llava_next_video_7b/vllm/README.md b/models/multimodal/vision_language_model/llava_next_video_7b/vllm/README.md
index 31b5622fc6e6cd7e62af94f71d20aaf0da78581b..17857d0ed4bf1c1d2205d556b10d8412e4976ca9 100755
--- a/models/multimodal/vision_language_model/llava_next_video_7b/vllm/README.md
+++ b/models/multimodal/vision_language_model/llava_next_video_7b/vllm/README.md
@@ -11,7 +11,8 @@ models on VideoMME bench. Base LLM: lmsys/vicuna-7b-v1.5
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/minicpm_v/vllm/README.md b/models/multimodal/vision_language_model/minicpm_v/vllm/README.md
index ea1c8d748e3daa6330fb59767289c4b2bb6dcc95..ef8fa31bfefc1a5bec2f0fc994967d50c10bf2cd 100644
--- a/models/multimodal/vision_language_model/minicpm_v/vllm/README.md
+++ b/models/multimodal/vision_language_model/minicpm_v/vllm/README.md
@@ -10,7 +10,8 @@ techniques, making it suitable for deployment in resource-constrained environmen
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/multimodal/vision_language_model/pixtral/vllm/README.md b/models/multimodal/vision_language_model/pixtral/vllm/README.md
index bb3abd99e2f14eb82f410568c7573c40818cf154..5ef06c0e888f8ff9cf9c76a10b4aaf3a3da87e4f 100644
--- a/models/multimodal/vision_language_model/pixtral/vllm/README.md
+++ b/models/multimodal/vision_language_model/pixtral/vllm/README.md
@@ -8,7 +8,8 @@ Pixtral is trained to understand both natural images and documents, achieving 52
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |

 ## Model Preparation
diff --git a/models/nlp/llm/baichuan2-7b/vllm/README.md b/models/nlp/llm/baichuan2-7b/vllm/README.md
index 95afd0d704412e783530ad82e7d4b060cc193784..21144b8abd2b83dddbd2b82b9975df5877edcf2b 100755
--- a/models/nlp/llm/baichuan2-7b/vllm/README.md
+++ b/models/nlp/llm/baichuan2-7b/vllm/README.md
@@ -11,7 +11,8 @@ its excellent capabilities in language understanding and generation.This release
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |

 ## Model Preparation
diff --git a/models/nlp/llm/chatglm3-6b-32k/vllm/README.md b/models/nlp/llm/chatglm3-6b-32k/vllm/README.md
index e42fad9b60abf885bde0090c8761a95f8efdce95..7cfe5845ae4505f9621e1d03fd3f99318ca26da3 100644
--- a/models/nlp/llm/chatglm3-6b-32k/vllm/README.md
+++ b/models/nlp/llm/chatglm3-6b-32k/vllm/README.md
@@ -12,7 +12,8 @@ we recommend using ChatGLM3-6B-32K.
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/chatglm3-6b/vllm/README.md b/models/nlp/llm/chatglm3-6b/vllm/README.md index 8f991f858d3d244053cc866afaa28a95920b7260..f8914d93ba4555e020852097aa7dbf71764592f4 100644 --- a/models/nlp/llm/chatglm3-6b/vllm/README.md +++ b/models/nlp/llm/chatglm3-6b/vllm/README.md @@ -10,7 +10,8 @@ translation. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-llama-70b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-llama-70b/vllm/README.md index 539fa7303f190daa7d7e70dff12ea1c42e7ef677..2cd9a10e07bb302418be1b33f4399d3945546669 100644 --- a/models/nlp/llm/deepseek-r1-distill-llama-70b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-llama-70b/vllm/README.md @@ -10,7 +10,8 @@ based on Qwen2.5 and Llama3 series to the community. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-llama-8b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-llama-8b/vllm/README.md index 4f94a027fe88eb37c0532711d07407244e2c2f6a..61cc5aa2ae05b908a2cc8b725c834cb26eea5442 100644 --- a/models/nlp/llm/deepseek-r1-distill-llama-8b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-llama-8b/vllm/README.md @@ -10,7 +10,8 @@ based on Qwen2.5 and Llama3 series to the community. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm/README.md index 31d38e4defe3b17b3b47e2769767d67e3ed99cb9..6b3642bf485ebeed2a329875514e2cd49ef2e5aa 100644 --- a/models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm/README.md @@ -10,7 +10,8 @@ based on Qwen2.5 and Llama3 series to the community. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm/README.md index 20c1e9b5d90da9155bc113bc0931f082ab193204..14ecabfaaa33258b1e6402b14f85e1c26570509c 100644 --- a/models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm/README.md @@ -10,7 +10,8 @@ DeepSeek-R1. We slightly change their configs and tokenizers. We open-source di | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm/README.md index 7d83e8c313fa72fda62e57a86c49ed70f46b7c44..5b6611b83c061fc0b6bb54e96e5d9235abc29732 100644 --- a/models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm/README.md @@ -10,7 +10,8 @@ DeepSeek-R1. We slightly change their configs and tokenizers. 
We open-source di | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm/README.md index 76612f367785e988023852ba52365c9d0fc807af..e5dd81660128385d214b052ad15cd4382dac4420 100644 --- a/models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm/README.md @@ -10,7 +10,8 @@ based on Qwen2.5 and Llama3 series to the community. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama2-13b/trtllm/README.md b/models/nlp/llm/llama2-13b/trtllm/README.md index e1b3931956844506f93cf5a66b995d4f170bb3df..cdcf07561f5f1abaf1335ec1d8a70752f40fd69c 100755 --- a/models/nlp/llm/llama2-13b/trtllm/README.md +++ b/models/nlp/llm/llama2-13b/trtllm/README.md @@ -11,7 +11,8 @@ from 7B to 70B. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama2-70b/trtllm/README.md b/models/nlp/llm/llama2-70b/trtllm/README.md index d1437323017abbfc4115a329164a1aa20ebbaec0..d03366503f2f407a961aae18a5aa429d921b9fb3 100644 --- a/models/nlp/llm/llama2-70b/trtllm/README.md +++ b/models/nlp/llm/llama2-70b/trtllm/README.md @@ -13,7 +13,8 @@ and contribute to the responsible development of LLMs. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama2-7b/trtllm/README.md b/models/nlp/llm/llama2-7b/trtllm/README.md index 9f69636b37a627e917bfe5f6280bf1472755f0c6..ecd8ddf8e67205670dfeb5351c322b3d7566894b 100644 --- a/models/nlp/llm/llama2-7b/trtllm/README.md +++ b/models/nlp/llm/llama2-7b/trtllm/README.md @@ -13,7 +13,8 @@ and contribute to the responsible development of LLMs. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama2-7b/vllm/README.md b/models/nlp/llm/llama2-7b/vllm/README.md index b4d7bf1c544d7081a72dc129d832321e20934cf7..78f9081126d24f2f9cae211817a0f0411d6ea5dd 100755 --- a/models/nlp/llm/llama2-7b/vllm/README.md +++ b/models/nlp/llm/llama2-7b/vllm/README.md @@ -13,7 +13,8 @@ and contribute to the responsible development of LLMs. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama3-70b/vllm/README.md b/models/nlp/llm/llama3-70b/vllm/README.md index 43a765890af32b710f3d51d3570fb1086b8c7a76..77ca3743e5df09f7ab6dd033cde0c8eaa0478a3f 100644 --- a/models/nlp/llm/llama3-70b/vllm/README.md +++ b/models/nlp/llm/llama3-70b/vllm/README.md @@ -13,7 +13,8 @@ large-scale AI applications, offering enhanced reasoning and instruction-followi | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen-7b/vllm/README.md b/models/nlp/llm/qwen-7b/vllm/README.md index de2d1a7c848c8702100860262ab102359acf86d5..b7051d694ce3cfc695dd220319b415f223d0d295 100644 --- a/models/nlp/llm/qwen-7b/vllm/README.md +++ b/models/nlp/llm/qwen-7b/vllm/README.md @@ -13,7 +13,8 @@ developing intelligent agent applications. It also includes specialized versions | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen1.5-14b/vllm/README.md b/models/nlp/llm/qwen1.5-14b/vllm/README.md index 5a520db3c0d97514a75b33b147dafc25d02b6244..fd431b2dbe809910ab31c536d7a90fbec1f3b1d9 100644 --- a/models/nlp/llm/qwen1.5-14b/vllm/README.md +++ b/models/nlp/llm/qwen1.5-14b/vllm/README.md @@ -12,7 +12,8 @@ not include GQA (except for 32B) and the mixture of SWA and full attention. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen1.5-32b/vllm/README.md b/models/nlp/llm/qwen1.5-32b/vllm/README.md index 69ac33dd3ae55ecae42921688af03faa0969804e..158d882a6ca14e5d8d25ac0bd92ab7a910c538e5 100755 --- a/models/nlp/llm/qwen1.5-32b/vllm/README.md +++ b/models/nlp/llm/qwen1.5-32b/vllm/README.md @@ -11,7 +11,8 @@ have an improved tokenizer adaptive to multiple natural languages and codes. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen1.5-72b/vllm/README.md b/models/nlp/llm/qwen1.5-72b/vllm/README.md index aba28082b11f403493366377b6905251f38f1d6a..ab26f60a1fbeaad9bb1c2346a138f76564435a16 100644 --- a/models/nlp/llm/qwen1.5-72b/vllm/README.md +++ b/models/nlp/llm/qwen1.5-72b/vllm/README.md @@ -12,7 +12,8 @@ not include GQA (except for 32B) and the mixture of SWA and full attention. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen1.5-7b/tgi/README.md b/models/nlp/llm/qwen1.5-7b/tgi/README.md index e00da336eb14bece339e978deb13988d1b6f6160..34ea64308cf7fff62fc9789cc006cbaaf0c47562 100644 --- a/models/nlp/llm/qwen1.5-7b/tgi/README.md +++ b/models/nlp/llm/qwen1.5-7b/tgi/README.md @@ -12,7 +12,8 @@ not include GQA (except for 32B) and the mixture of SWA and full attention. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen1.5-7b/vllm/README.md b/models/nlp/llm/qwen1.5-7b/vllm/README.md index 7a9cc65fb4cdc46ce56134a3727e2f3fbf84e671..6e71dc32cb2fc67ed1558c86b1accd9af6c87052 100644 --- a/models/nlp/llm/qwen1.5-7b/vllm/README.md +++ b/models/nlp/llm/qwen1.5-7b/vllm/README.md @@ -12,7 +12,8 @@ not include GQA (except for 32B) and the mixture of SWA and full attention. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen2-72b/vllm/README.md b/models/nlp/llm/qwen2-72b/vllm/README.md index 74200cf7faeeae29150eaa2a1df4bf27d047c5aa..69a0cc9c3ae43c1a15ab0d144868df0ccc5c7f59 100755 --- a/models/nlp/llm/qwen2-72b/vllm/README.md +++ b/models/nlp/llm/qwen2-72b/vllm/README.md @@ -18,7 +18,8 @@ Please refer to this section for detailed instructions on how to deploy Qwen2 fo | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen2-7b/vllm/README.md b/models/nlp/llm/qwen2-7b/vllm/README.md index 5bcd6b53e48e32fb9b8b617f100b066a37f8893d..b7b28e4584268e5b4d7ac0f3fcdccad71eac197a 100755 --- a/models/nlp/llm/qwen2-7b/vllm/README.md +++ b/models/nlp/llm/qwen2-7b/vllm/README.md @@ -17,7 +17,8 @@ Qwen2-7B-Instruct supports a context length of up to 131,072 tokens, enabling th | GPU | [IXUCA 
SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/stablelm/vllm/README.md b/models/nlp/llm/stablelm/vllm/README.md index ffcdefdf8b1313d94ee31d40bb46ac0a89853e68..ceeafabe7f5c9ad92b9d667381ae269a67ed24ce 100644 --- a/models/nlp/llm/stablelm/vllm/README.md +++ b/models/nlp/llm/stablelm/vllm/README.md @@ -12,7 +12,8 @@ contextual relationships, which enhances the quality and accuracy of the generat | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/albert/ixrt/README.md b/models/nlp/plm/albert/ixrt/README.md index 778719bddff35be6d4fc5136b18e6efcc3d96da5..d53712c9d0cac7ba4c3e532365bb3566e5f0fd40 100644 --- a/models/nlp/plm/albert/ixrt/README.md +++ b/models/nlp/plm/albert/ixrt/README.md @@ -8,7 +8,8 @@ Albert (A Lite BERT) is a variant of the BERT (Bidirectional Encoder Representat | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/bert_base_ner/igie/README.md b/models/nlp/plm/bert_base_ner/igie/README.md index ab6fd88b69f9bf858b29ba982b794d8e62fdc9db..f90ceb6ee753ce56388bffc89407164893883ae7 100644 --- a/models/nlp/plm/bert_base_ner/igie/README.md +++ b/models/nlp/plm/bert_base_ner/igie/README.md @@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled | GPU | [IXUCA 
SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/bert_base_squad/igie/README.md b/models/nlp/plm/bert_base_squad/igie/README.md index ac7477f96a7f4156213296e441bb1d6c621aca9d..9e42dde83f47c46d6545e6476868919cec76b51b 100644 --- a/models/nlp/plm/bert_base_squad/igie/README.md +++ b/models/nlp/plm/bert_base_squad/igie/README.md @@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/bert_base_squad/ixrt/README.md b/models/nlp/plm/bert_base_squad/ixrt/README.md index 1f3dd3953f7a4a63491f336edec2e2223c9a9ac9..b9569a6cc481b640d130860a00330a77f21200e1 100644 --- a/models/nlp/plm/bert_base_squad/ixrt/README.md +++ b/models/nlp/plm/bert_base_squad/ixrt/README.md @@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/bert_large_squad/igie/README.md b/models/nlp/plm/bert_large_squad/igie/README.md index e1d1435810c06a28204c8c676d9b3f2140539a08..9182202fec5e7568f09d24c05fe5b38f456ae47f 100644 --- a/models/nlp/plm/bert_large_squad/igie/README.md +++ b/models/nlp/plm/bert_large_squad/igie/README.md @@ -8,7 +8,8 @@ BERT is designed to pre-train deep 
bidirectional representations from unlabeled | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/bert_large_squad/ixrt/README.md b/models/nlp/plm/bert_large_squad/ixrt/README.md index f66034138a29e03f054007f7391aca3c539b6dd5..0670e856bd1bb8b8d630c68a730841d8bf33cf90 100644 --- a/models/nlp/plm/bert_large_squad/ixrt/README.md +++ b/models/nlp/plm/bert_large_squad/ixrt/README.md @@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/deberta/ixrt/README.md b/models/nlp/plm/deberta/ixrt/README.md index 87496848406c894ec31d9886f4bbc6c6123980c1..cd2e30b13a60e956a4e9e6204655ef961e391df4 100644 --- a/models/nlp/plm/deberta/ixrt/README.md +++ b/models/nlp/plm/deberta/ixrt/README.md @@ -13,7 +13,8 @@ fine-tuning to better suit specific downstream tasks, thereby improving the mode | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/roberta/ixrt/README.md b/models/nlp/plm/roberta/ixrt/README.md index 92cc8e4eb8dfbb8e3490eab6aabaf38134c731b9..a25cf8047366a4b891174548b88b112f8bb27a4e 100644 --- a/models/nlp/plm/roberta/ixrt/README.md +++ b/models/nlp/plm/roberta/ixrt/README.md @@ -15,7 +15,8 @@ our models and code. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/plm/roformer/ixrt/README.md b/models/nlp/plm/roformer/ixrt/README.md index 5d37b5e6eb6ac8d7c0ce107ee5b248e64ba96a11..3b90ce7a23671cc7c4f21ee79f2445c3a467d5e7 100644 --- a/models/nlp/plm/roformer/ixrt/README.md +++ b/models/nlp/plm/roformer/ixrt/README.md @@ -17,7 +17,8 @@ datasets. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/others/recommendation/wide_and_deep/ixrt/README.md b/models/others/recommendation/wide_and_deep/ixrt/README.md index 22796241f671d6bd7ff4280666270ea572dd8efb..f50911d1aa19f696282b0ab666ae3ee2a5ab84af 100644 --- a/models/others/recommendation/wide_and_deep/ixrt/README.md +++ b/models/others/recommendation/wide_and_deep/ixrt/README.md @@ -8,7 +8,8 @@ Generalized linear models with nonlinear feature transformations are widely used | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation