From 2a94ca03a2e37e24470c623482712625bc04b6f2 Mon Sep 17 00:00:00 2001
From: "hongliang.yuan"
Date: Thu, 10 Jul 2025 17:12:05 +0800
Subject: [PATCH] update support 4.3.0

---
 README.md | 368 +++++++++---------
 .../conformer/igie/README.md | 3 +-
 .../conformer/ixrt/README.md | 3 +-
 .../cv/classification/alexnet/igie/README.md | 3 +-
 .../cv/classification/alexnet/ixrt/README.md | 3 +-
 models/cv/classification/clip/igie/README.md | 3 +-
 .../conformer_base/igie/README.md | 3 +-
 .../convnext_base/igie/README.md | 3 +-
 .../convnext_base/ixrt/README.md | 3 +-
 .../classification/convnext_s/igie/README.md | 3 +-
 .../convnext_small/igie/README.md | 3 +-
 .../convnext_small/ixrt/README.md | 3 +-
 .../convnext_tiny/igie/README.md | 3 +-
 .../cspdarknet53/igie/README.md | 3 +-
 .../cspdarknet53/ixrt/README.md | 3 +-
 .../classification/cspresnet50/igie/README.md | 3 +-
 .../classification/cspresnet50/ixrt/README.md | 3 +-
 .../cspresnext50/igie/README.md | 3 +-
 .../classification/deit_tiny/igie/README.md | 3 +-
 .../classification/deit_tiny/ixrt/README.md | 3 +-
 .../classification/densenet121/igie/README.md | 3 +-
 .../classification/densenet121/ixrt/README.md | 3 +-
 .../classification/densenet161/igie/README.md | 3 +-
 .../classification/densenet161/ixrt/README.md | 3 +-
 .../classification/densenet169/igie/README.md | 3 +-
 .../classification/densenet169/ixrt/README.md | 3 +-
 .../classification/densenet201/igie/README.md | 3 +-
 .../classification/densenet201/ixrt/README.md | 3 +-
 .../efficientnet_b0/igie/README.md | 3 +-
 .../efficientnet_b0/ixrt/README.md | 3 +-
 .../efficientnet_b1/igie/README.md | 3 +-
 .../efficientnet_b1/ixrt/README.md | 3 +-
 .../efficientnet_b2/igie/README.md | 3 +-
 .../efficientnet_b2/ixrt/README.md | 3 +-
 .../efficientnet_b3/igie/README.md | 3 +-
 .../efficientnet_b3/ixrt/README.md | 3 +-
 .../efficientnet_b4/igie/README.md | 3 +-
 .../efficientnet_b5/igie/README.md | 3 +-
 .../efficientnet_v2/igie/README.md | 3 +-
 .../efficientnet_v2/ixrt/README.md | 3 +-
 .../efficientnet_v2_s/igie/README.md | 3 +-
 .../efficientnet_v2_s/ixrt/README.md | 3 +-
 .../efficientnetv2_rw_t/igie/README.md | 3 +-
 .../efficientnetv2_rw_t/ixrt/README.md | 3 +-
 .../classification/googlenet/igie/README.md | 3 +-
 .../classification/googlenet/ixrt/README.md | 3 +-
 .../classification/hrnet_w18/igie/README.md | 3 +-
 .../classification/hrnet_w18/ixrt/README.md | 3 +-
 .../inception_resnet_v2/ixrt/README.md | 3 +-
 .../inception_v3/igie/README.md | 3 +-
 .../inception_v3/ixrt/README.md | 3 +-
 .../mlp_mixer_base/igie/README.md | 3 +-
 .../classification/mnasnet0_5/igie/README.md | 3 +-
 .../classification/mnasnet0_75/igie/README.md | 3 +-
 .../classification/mnasnet1_0/igie/README.md | 3 +-
 .../mobilenet_v2/igie/README.md | 3 +-
 .../mobilenet_v2/ixrt/README.md | 3 +-
 .../mobilenet_v3/igie/README.md | 3 +-
 .../mobilenet_v3/ixrt/README.md | 3 +-
 .../mobilenet_v3_large/igie/README.md | 3 +-
 .../regnet_x_16gf/igie/README.md | 3 +-
 .../regnet_x_1_6gf/igie/README.md | 3 +-
 .../regnet_x_3_2gf/igie/README.md | 3 +-
 .../regnet_y_16gf/igie/README.md | 3 +-
 .../regnet_y_1_6gf/igie/README.md | 3 +-
 .../cv/classification/repvgg/igie/README.md | 3 +-
 .../cv/classification/repvgg/ixrt/README.md | 3 +-
 .../classification/res2net50/igie/README.md | 3 +-
 .../classification/res2net50/ixrt/README.md | 3 +-
 .../classification/resnest50/igie/README.md | 3 +-
 .../classification/resnet101/igie/README.md | 3 +-
 .../classification/resnet101/ixrt/README.md | 3 +-
 .../classification/resnet152/igie/README.md | 3 +-
 .../cv/classification/resnet18/igie/README.md | 3 +-
 .../cv/classification/resnet18/ixrt/README.md | 3 +-
 .../cv/classification/resnet34/ixrt/README.md | 3 +-
 .../cv/classification/resnet50/igie/README.md | 3 +-
 .../cv/classification/resnet50/ixrt/README.md | 3 +-
 .../classification/resnetv1d50/igie/README.md | 3 +-
 .../classification/resnetv1d50/ixrt/README.md | 3 +-
 .../resnext101_32x8d/igie/README.md | 3 +-
 .../resnext101_32x8d/ixrt/README.md | 3 +-
 .../resnext101_64x4d/igie/README.md | 3 +-
 .../resnext101_64x4d/ixrt/README.md | 3 +-
 .../resnext50_32x4d/igie/README.md | 3 +-
 .../resnext50_32x4d/ixrt/README.md | 3 +-
 .../classification/seresnet50/igie/README.md | 3 +-
 .../shufflenet_v1/ixrt/README.md | 3 +-
 .../shufflenetv2_x0_5/igie/README.md | 3 +-
 .../shufflenetv2_x0_5/ixrt/README.md | 3 +-
 .../shufflenetv2_x1_0/igie/README.md | 3 +-
 .../shufflenetv2_x1_0/ixrt/README.md | 3 +-
 .../shufflenetv2_x1_5/igie/README.md | 3 +-
 .../shufflenetv2_x1_5/ixrt/README.md | 3 +-
 .../shufflenetv2_x2_0/igie/README.md | 3 +-
 .../shufflenetv2_x2_0/ixrt/README.md | 3 +-
 .../squeezenet_v1_0/igie/README.md | 3 +-
 .../squeezenet_v1_0/ixrt/README.md | 3 +-
 .../squeezenet_v1_1/igie/README.md | 3 +-
 .../squeezenet_v1_1/ixrt/README.md | 3 +-
 .../cv/classification/svt_base/igie/README.md | 3 +-
 .../swin_transformer/igie/README.md | 3 +-
 .../swin_transformer_large/ixrt/README.md | 3 +-
 .../classification/twins_pcpvt/igie/README.md | 3 +-
 .../cv/classification/van_b0/igie/README.md | 3 +-
 models/cv/classification/vgg11/igie/README.md | 3 +-
 models/cv/classification/vgg16/igie/README.md | 3 +-
 models/cv/classification/vgg16/ixrt/README.md | 3 +-
 models/cv/classification/vgg19/igie/README.md | 3 +-
 .../cv/classification/vgg19_bn/igie/README.md | 3 +-
 models/cv/classification/vit/igie/README.md | 3 +-
 .../wide_resnet101/igie/README.md | 3 +-
 .../wide_resnet50/igie/README.md | 3 +-
 .../wide_resnet50/ixrt/README.md | 3 +-
 .../face_recognition/facenet/ixrt/README.md | 3 +-
 .../solov1/ixrt/README.md | 3 +-
 .../deepsort/igie/README.md | 3 +-
 .../fastreid/igie/README.md | 3 +-
 .../repnet/igie/README.md | 3 +-
 .../cv/object_detection/atss/igie/README.md | 3 +-
 .../object_detection/centernet/igie/README.md | 3 +-
 .../object_detection/centernet/ixrt/README.md | 3 +-
 .../cv/object_detection/detr/ixrt/README.md | 3 +-
 .../cv/object_detection/fcos/igie/README.md | 3 +-
 .../cv/object_detection/fcos/ixrt/README.md | 3 +-
 .../object_detection/foveabox/igie/README.md | 3 +-
 .../object_detection/foveabox/ixrt/README.md | 3 +-
 .../cv/object_detection/fsaf/igie/README.md | 3 +-
 .../cv/object_detection/fsaf/ixrt/README.md | 3 +-
 models/cv/object_detection/gfl/igie/README.md | 3 +-
 .../cv/object_detection/hrnet/igie/README.md | 3 +-
 .../cv/object_detection/hrnet/ixrt/README.md | 3 +-
 models/cv/object_detection/paa/igie/README.md | 3 +-
 .../retinaface/igie/README.md | 3 +-
 .../retinaface/ixrt/README.md | 3 +-
 .../object_detection/retinanet/igie/README.md | 3 +-
 .../cv/object_detection/rtmdet/igie/README.md | 3 +-
 .../cv/object_detection/sabl/igie/README.md | 3 +-
 .../object_detection/yolov10/igie/README.md | 3 +-
 .../object_detection/yolov10/ixrt/README.md | 3 +-
 .../object_detection/yolov11/igie/README.md | 3 +-
 .../object_detection/yolov11/ixrt/README.md | 3 +-
 .../object_detection/yolov12/igie/README.md | 3 +-
 .../cv/object_detection/yolov3/igie/README.md | 3 +-
 .../cv/object_detection/yolov3/ixrt/README.md | 3 +-
 .../cv/object_detection/yolov4/igie/README.md | 3 +-
 .../cv/object_detection/yolov4/ixrt/README.md | 3 +-
 .../cv/object_detection/yolov5/igie/README.md | 3 +-
 .../cv/object_detection/yolov5/ixrt/README.md | 3 +-
 .../object_detection/yolov5s/ixrt/README.md | 3 +-
 .../cv/object_detection/yolov6/igie/README.md | 3 +-
 .../cv/object_detection/yolov6/ixrt/README.md | 3 +-
 .../cv/object_detection/yolov7/igie/README.md | 3 +-
 .../cv/object_detection/yolov7/ixrt/README.md | 3 +-
 .../cv/object_detection/yolov8/igie/README.md | 3 +-
 .../cv/object_detection/yolov8/ixrt/README.md | 3 +-
 .../cv/object_detection/yolov9/igie/README.md | 3 +-
 .../cv/object_detection/yolov9/ixrt/README.md | 3 +-
 .../cv/object_detection/yolox/igie/README.md | 3 +-
 .../cv/object_detection/yolox/ixrt/README.md | 3 +-
 models/cv/ocr/kie_layoutxlm/igie/README.md | 3 +-
 models/cv/ocr/svtr/igie/README.md | 3 +-
 .../pose_estimation/hrnetpose/igie/README.md | 3 +-
 .../lightweight_openpose/ixrt/README.md | 3 +-
 .../cv/pose_estimation/rtmpose/igie/README.md | 3 +-
 .../cv/pose_estimation/rtmpose/ixrt/README.md | 3 +-
 .../semantic_segmentation/unet/igie/README.md | 3 +-
 .../stable-diffusion/diffusers/README.md | 3 +-
 .../vision_language_model/aria/vllm/README.md | 1 +
 .../chameleon_7b/vllm/README.md | 3 +-
 .../clip/ixformer/README.md | 3 +-
 .../fuyu_8b/vllm/README.md | 3 +-
 .../h2vol/vllm/README.md | 1 +
 .../idefics3/vllm/README.md | 1 +
 .../intern_vl/vllm/README.md | 3 +-
 .../llama-3.2/vllm/README.md | 3 +-
 .../llava/vllm/README.md | 3 +-
 .../llava_next_video_7b/vllm/README.md | 3 +-
 .../minicpm_v/vllm/README.md | 3 +-
 .../pixtral/vllm/README.md | 3 +-
 models/nlp/llm/baichuan2-7b/vllm/README.md | 3 +-
 models/nlp/llm/chatglm3-6b-32k/vllm/README.md | 3 +-
 models/nlp/llm/chatglm3-6b/vllm/README.md | 3 +-
 .../vllm/README.md | 3 +-
 .../vllm/README.md | 3 +-
 .../vllm/README.md | 3 +-
 .../vllm/README.md | 3 +-
 .../vllm/README.md | 3 +-
 .../vllm/README.md | 3 +-
 models/nlp/llm/llama2-13b/trtllm/README.md | 3 +-
 models/nlp/llm/llama2-70b/trtllm/README.md | 3 +-
 models/nlp/llm/llama2-7b/trtllm/README.md | 3 +-
 models/nlp/llm/llama2-7b/vllm/README.md | 3 +-
 models/nlp/llm/llama3-70b/vllm/README.md | 3 +-
 models/nlp/llm/qwen-7b/vllm/README.md | 3 +-
 models/nlp/llm/qwen1.5-14b/vllm/README.md | 3 +-
 models/nlp/llm/qwen1.5-32b/vllm/README.md | 3 +-
 models/nlp/llm/qwen1.5-72b/vllm/README.md | 3 +-
 models/nlp/llm/qwen1.5-7b/tgi/README.md | 3 +-
 models/nlp/llm/qwen1.5-7b/vllm/README.md | 3 +-
 models/nlp/llm/qwen2-72b/vllm/README.md | 3 +-
 models/nlp/llm/qwen2-7b/vllm/README.md | 3 +-
 models/nlp/llm/stablelm/vllm/README.md | 3 +-
 models/nlp/plm/albert/ixrt/README.md | 3 +-
 models/nlp/plm/bert_base_ner/igie/README.md | 3 +-
 models/nlp/plm/bert_base_squad/igie/README.md | 3 +-
 models/nlp/plm/bert_base_squad/ixrt/README.md | 3 +-
 .../nlp/plm/bert_large_squad/igie/README.md | 3 +-
 .../nlp/plm/bert_large_squad/ixrt/README.md | 3 +-
 models/nlp/plm/deberta/ixrt/README.md | 3 +-
 models/nlp/plm/roberta/ixrt/README.md | 3 +-
 models/nlp/plm/roformer/ixrt/README.md | 3 +-
 .../wide_and_deep/ixrt/README.md | 3 +-
 213 files changed, 605 insertions(+), 393 deletions(-)

diff --git a/README.md b/README.md
index 88b38054..2f4f0a05 100644
--- a/README.md
+++ b/README.md
@@ -26,27 +26,27 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型
 | Model | vLLM | TRT-LLM | TGI | IXUCA SDK |
 |-------------------------------|--------------------------------------------------------|---------------------------------------|------------------------------------|-----------|
-| Baichuan2-7B | [✅](models/nlp/llm/baichuan2-7b/vllm) | | | 4.2.0 |
-| ChatGLM-3-6B | [✅](models/nlp/llm/chatglm3-6b/vllm) | | | 4.2.0 |
-| ChatGLM-3-6B-32K | [✅](models/nlp/llm/chatglm3-6b-32k/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Llama-8B | [✅](models/nlp/llm/deepseek-r1-distill-llama-8b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Llama-70B | [✅](models/nlp/llm/deepseek-r1-distill-llama-70b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Qwen-1.5B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Qwen-7B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Qwen-14B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm) | | | 4.2.0 |
-| DeepSeek-R1-Distill-Qwen-32B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm) | | | 4.2.0 |
-| Llama2-7B | [✅](models/nlp/llm/llama2-7b/vllm) | [✅](models/nlp/llm/llama2-7b/trtllm) | | 4.2.0 |
-| Llama2-13B | | [✅](models/nlp/llm/llama2-13b/trtllm) | | 4.2.0 |
-| Llama2-70B | | [✅](models/nlp/llm/llama2-70b/trtllm) | | 4.2.0 |
-| Llama3-70B | [✅](models/nlp/llm/llama3-70b/vllm) | | | 4.2.0 |
-| Qwen-7B | [✅](models/nlp/llm/qwen-7b/vllm) | | | 4.2.0 |
-| Qwen1.5-7B | [✅](models/nlp/llm/qwen1.5-7b/vllm) | | [✅](models/nlp/llm/qwen1.5-7b/tgi) | 4.2.0 |
-| Qwen1.5-14B | [✅](models/nlp/llm/qwen1.5-14b/vllm) | | | 4.2.0 |
-| Qwen1.5-32B Chat | [✅](models/nlp/llm/qwen1.5-32b/vllm) | | | 4.2.0 |
-| Qwen1.5-72B | [✅](models/nlp/llm/qwen1.5-72b/vllm) | | | 4.2.0 |
-| Qwen2-7B Instruct | [✅](models/nlp/llm/qwen2-7b/vllm) | | | 4.2.0 |
-| Qwen2-72B Instruct | [✅](models/nlp/llm/qwen2-72b/vllm) | | | 4.2.0 |
-| StableLM2-1.6B | [✅](models/nlp/llm/stablelm/vllm) | | | 4.2.0 |
+| Baichuan2-7B | [✅](models/nlp/llm/baichuan2-7b/vllm) | | | 4.3.0 |
+| ChatGLM-3-6B | [✅](models/nlp/llm/chatglm3-6b/vllm) | | | 4.3.0 |
+| ChatGLM-3-6B-32K | [✅](models/nlp/llm/chatglm3-6b-32k/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Llama-8B | [✅](models/nlp/llm/deepseek-r1-distill-llama-8b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Llama-70B | [✅](models/nlp/llm/deepseek-r1-distill-llama-70b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Qwen-1.5B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Qwen-7B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Qwen-14B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm) | | | 4.3.0 |
+| DeepSeek-R1-Distill-Qwen-32B | [✅](models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm) | | | 4.3.0 |
+| Llama2-7B | [✅](models/nlp/llm/llama2-7b/vllm) | [✅](models/nlp/llm/llama2-7b/trtllm) | | 4.3.0 |
+| Llama2-13B | | [✅](models/nlp/llm/llama2-13b/trtllm) | | 4.3.0 |
+| Llama2-70B | | [✅](models/nlp/llm/llama2-70b/trtllm) | | 4.3.0 |
+| Llama3-70B | [✅](models/nlp/llm/llama3-70b/vllm) | | | 4.3.0 |
+| Qwen-7B | [✅](models/nlp/llm/qwen-7b/vllm) | | | 4.3.0 |
+| Qwen1.5-7B | [✅](models/nlp/llm/qwen1.5-7b/vllm) | | [✅](models/nlp/llm/qwen1.5-7b/tgi) | 4.3.0 |
+| Qwen1.5-14B | [✅](models/nlp/llm/qwen1.5-14b/vllm) | | | 4.3.0 |
+| Qwen1.5-32B Chat | [✅](models/nlp/llm/qwen1.5-32b/vllm) | | | 4.3.0 |
+| Qwen1.5-72B | [✅](models/nlp/llm/qwen1.5-72b/vllm) | | | 4.3.0 |
+| Qwen2-7B Instruct | [✅](models/nlp/llm/qwen2-7b/vllm) | | | 4.3.0 |
+| Qwen2-72B Instruct | [✅](models/nlp/llm/qwen2-72b/vllm) | | | 4.3.0 |
+| StableLM2-1.6B | [✅](models/nlp/llm/stablelm/vllm) | | | 4.3.0 |
 
 ### 计算机视觉(CV)
 
@@ -54,200 +54,200 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |------------------------|-------|--------------------------------------------------------|-----------------------------------------------------------|-----------|
-| AlexNet | FP16 | [✅](models/cv/classification/alexnet/igie) | [✅](models/cv/classification/alexnet/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/alexnet/igie) | [✅](models/cv/classification/alexnet/ixrt) | 4.2.0 |
-| CLIP | FP16 | [✅](models/cv/classification/clip/igie) | | 4.2.0 |
-| Conformer-B | FP16 | [✅](models/cv/classification/conformer_base/igie) | | 4.2.0 |
-| ConvNeXt-Base | FP16 | [✅](models/cv/classification/convnext_base/igie) | [✅](models/cv/classification/convnext_base/ixrt) | 4.2.0 |
-| ConvNext-S | FP16 | [✅](models/cv/classification/convnext_s/igie) | | 4.2.0 |
-| ConvNeXt-Small | FP16 | [✅](models/cv/classification/convnext_small/igie) | [✅](models/cv/classification/convnext_small/ixrt) | 4.2.0 |
-| ConvNeXt-Tiny | FP16 | [✅](models/cv/classification/convnext_tiny/igie) | | 4.2.0 |
-| CSPDarkNet53 | FP16 | [✅](models/cv/classification/cspdarknet53/igie) | [✅](models/cv/classification/cspdarknet53/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/cspdarknet53/ixrt) | 4.2.0 |
-| CSPResNet50 | FP16 | [✅](models/cv/classification/cspresnet50/igie) | [✅](models/cv/classification/cspresnet50/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/cspresnet50/ixrt) | 4.2.0 |
-| CSPResNeXt50 | FP16 | [✅](models/cv/classification/cspresnext50/igie) | | 4.2.0 |
-| DeiT-tiny | FP16 | [✅](models/cv/classification/deit_tiny/igie) | [✅](models/cv/classification/deit_tiny/ixrt) | 4.2.0 |
-| DenseNet121 | FP16 | [✅](models/cv/classification/densenet121/igie) | [✅](models/cv/classification/densenet121/ixrt) | 4.2.0 |
-| DenseNet161 | FP16 | [✅](models/cv/classification/densenet161/igie) | [✅](models/cv/classification/densenet161/ixrt) | 4.2.0 |
-| DenseNet169 | FP16 | [✅](models/cv/classification/densenet169/igie) | [✅](models/cv/classification/densenet169/ixrt) | 4.2.0 |
-| DenseNet201 | FP16 | [✅](models/cv/classification/densenet201/igie) | [✅](models/cv/classification/densenet201/ixrt) | 4.2.0 |
-| EfficientNet-B0 | FP16 | [✅](models/cv/classification/efficientnet_b0/igie) | [✅](models/cv/classification/efficientnet_b0/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/efficientnet_b0/ixrt) | 4.2.0 |
-| EfficientNet-B1 | FP16 | [✅](models/cv/classification/efficientnet_b1/igie) | [✅](models/cv/classification/efficientnet_b1/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/efficientnet_b1/ixrt) | 4.2.0 |
-| EfficientNet-B2 | FP16 | [✅](models/cv/classification/efficientnet_b2/igie) | [✅](models/cv/classification/efficientnet_b2/ixrt) | 4.2.0 |
-| EfficientNet-B3 | FP16 | [✅](models/cv/classification/efficientnet_b3/igie) | [✅](models/cv/classification/efficientnet_b3/ixrt) | 4.2.0 |
-| EfficientNet-B4 | FP16 | [✅](models/cv/classification/efficientnet_b4/igie) | | 4.2.0 |
-| EfficientNet-B5 | FP16 | [✅](models/cv/classification/efficientnet_b5/igie) | | 4.2.0 |
-| EfficientNetV2 | FP16 | [✅](models/cv/classification/efficientnet_v2/igie) | [✅](models/cv/classification/efficientnet_v2/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/efficientnet_v2/ixrt) | 4.2.0 |
-| EfficientNetv2_rw_t | FP16 | [✅](models/cv/classification/efficientnetv2_rw_t/igie) | [✅](models/cv/classification/efficientnetv2_rw_t/ixrt) | 4.2.0 |
-| EfficientNetv2_s | FP16 | [✅](models/cv/classification/efficientnet_v2_s/igie) | [✅](models/cv/classification/efficientnet_v2_s/ixrt) | 4.2.0 |
-| GoogLeNet | FP16 | [✅](models/cv/classification/googlenet/igie) | [✅](models/cv/classification/googlenet/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/googlenet/igie) | [✅](models/cv/classification/googlenet/ixrt) | 4.2.0 |
-| HRNet-W18 | FP16 | [✅](models/cv/classification/hrnet_w18/igie) | [✅](models/cv/classification/hrnet_w18/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/hrnet_w18/ixrt) | 4.2.0 |
-| InceptionV3 | FP16 | [✅](models/cv/classification/inception_v3/igie) | [✅](models/cv/classification/inception_v3/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/inception_v3/igie) | [✅](models/cv/classification/inception_v3/ixrt) | 4.2.0 |
-| Inception-ResNet-V2 | FP16 | | [✅](models/cv/classification/inception_resnet_v2/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/inception_resnet_v2/ixrt) | 4.2.0 |
-| Mixer_B | FP16 | [✅](models/cv/classification/mlp_mixer_base/igie) | | 4.2.0 |
-| MNASNet0_5 | FP16 | [✅](models/cv/classification/mnasnet0_5/igie) | | 4.2.0 |
-| MNASNet0_75 | FP16 | [✅](models/cv/classification/mnasnet0_75/igie) | | 4.2.0 |
-| MNASNet1_0 | FP16 | [✅](models/cv/classification/mnasnet1_0/igie) | | 4.2.0 |
-| MobileNetV2 | FP16 | [✅](models/cv/classification/mobilenet_v2/igie) | [✅](models/cv/classification/mobilenet_v2/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/mobilenet_v2/igie) | [✅](models/cv/classification/mobilenet_v2/ixrt) | 4.2.0 |
-| MobileNetV3_Large | FP16 | [✅](models/cv/classification/mobilenet_v3_large/igie) | | 4.2.0 |
-| MobileNetV3_Small | FP16 | [✅](models/cv/classification/mobilenet_v3/igie) | [✅](models/cv/classification/mobilenet_v3/ixrt) | 4.2.0 |
+| AlexNet | FP16 | [✅](models/cv/classification/alexnet/igie) | [✅](models/cv/classification/alexnet/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/alexnet/igie) | [✅](models/cv/classification/alexnet/ixrt) | 4.3.0 |
+| CLIP | FP16 | [✅](models/cv/classification/clip/igie) | | 4.3.0 |
+| Conformer-B | FP16 | [✅](models/cv/classification/conformer_base/igie) | | 4.3.0 |
+| ConvNeXt-Base | FP16 | [✅](models/cv/classification/convnext_base/igie) | [✅](models/cv/classification/convnext_base/ixrt) | 4.3.0 |
+| ConvNext-S | FP16 | [✅](models/cv/classification/convnext_s/igie) | | 4.3.0 |
+| ConvNeXt-Small | FP16 | [✅](models/cv/classification/convnext_small/igie) | [✅](models/cv/classification/convnext_small/ixrt) | 4.3.0 |
+| ConvNeXt-Tiny | FP16 | [✅](models/cv/classification/convnext_tiny/igie) | | 4.3.0 |
+| CSPDarkNet53 | FP16 | [✅](models/cv/classification/cspdarknet53/igie) | [✅](models/cv/classification/cspdarknet53/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/cspdarknet53/ixrt) | 4.3.0 |
+| CSPResNet50 | FP16 | [✅](models/cv/classification/cspresnet50/igie) | [✅](models/cv/classification/cspresnet50/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/cspresnet50/ixrt) | 4.3.0 |
+| CSPResNeXt50 | FP16 | [✅](models/cv/classification/cspresnext50/igie) | | 4.3.0 |
+| DeiT-tiny | FP16 | [✅](models/cv/classification/deit_tiny/igie) | [✅](models/cv/classification/deit_tiny/ixrt) | 4.3.0 |
+| DenseNet121 | FP16 | [✅](models/cv/classification/densenet121/igie) | [✅](models/cv/classification/densenet121/ixrt) | 4.3.0 |
+| DenseNet161 | FP16 | [✅](models/cv/classification/densenet161/igie) | [✅](models/cv/classification/densenet161/ixrt) | 4.3.0 |
+| DenseNet169 | FP16 | [✅](models/cv/classification/densenet169/igie) | [✅](models/cv/classification/densenet169/ixrt) | 4.3.0 |
+| DenseNet201 | FP16 | [✅](models/cv/classification/densenet201/igie) | [✅](models/cv/classification/densenet201/ixrt) | 4.3.0 |
+| EfficientNet-B0 | FP16 | [✅](models/cv/classification/efficientnet_b0/igie) | [✅](models/cv/classification/efficientnet_b0/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/efficientnet_b0/ixrt) | 4.3.0 |
+| EfficientNet-B1 | FP16 | [✅](models/cv/classification/efficientnet_b1/igie) | [✅](models/cv/classification/efficientnet_b1/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/efficientnet_b1/ixrt) | 4.3.0 |
+| EfficientNet-B2 | FP16 | [✅](models/cv/classification/efficientnet_b2/igie) | [✅](models/cv/classification/efficientnet_b2/ixrt) | 4.3.0 |
+| EfficientNet-B3 | FP16 | [✅](models/cv/classification/efficientnet_b3/igie) | [✅](models/cv/classification/efficientnet_b3/ixrt) | 4.3.0 |
+| EfficientNet-B4 | FP16 | [✅](models/cv/classification/efficientnet_b4/igie) | | 4.3.0 |
+| EfficientNet-B5 | FP16 | [✅](models/cv/classification/efficientnet_b5/igie) | | 4.3.0 |
+| EfficientNetV2 | FP16 | [✅](models/cv/classification/efficientnet_v2/igie) | [✅](models/cv/classification/efficientnet_v2/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/efficientnet_v2/ixrt) | 4.3.0 |
+| EfficientNetv2_rw_t | FP16 | [✅](models/cv/classification/efficientnetv2_rw_t/igie) | [✅](models/cv/classification/efficientnetv2_rw_t/ixrt) | 4.3.0 |
+| EfficientNetv2_s | FP16 | [✅](models/cv/classification/efficientnet_v2_s/igie) | [✅](models/cv/classification/efficientnet_v2_s/ixrt) | 4.3.0 |
+| GoogLeNet | FP16 | [✅](models/cv/classification/googlenet/igie) | [✅](models/cv/classification/googlenet/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/googlenet/igie) | [✅](models/cv/classification/googlenet/ixrt) | 4.3.0 |
+| HRNet-W18 | FP16 | [✅](models/cv/classification/hrnet_w18/igie) | [✅](models/cv/classification/hrnet_w18/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/hrnet_w18/ixrt) | 4.3.0 |
+| InceptionV3 | FP16 | [✅](models/cv/classification/inception_v3/igie) | [✅](models/cv/classification/inception_v3/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/inception_v3/igie) | [✅](models/cv/classification/inception_v3/ixrt) | 4.3.0 |
+| Inception-ResNet-V2 | FP16 | | [✅](models/cv/classification/inception_resnet_v2/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/inception_resnet_v2/ixrt) | 4.3.0 |
+| Mixer_B | FP16 | [✅](models/cv/classification/mlp_mixer_base/igie) | | 4.3.0 |
+| MNASNet0_5 | FP16 | [✅](models/cv/classification/mnasnet0_5/igie) | | 4.3.0 |
+| MNASNet0_75 | FP16 | [✅](models/cv/classification/mnasnet0_75/igie) | | 4.3.0 |
+| MNASNet1_0 | FP16 | [✅](models/cv/classification/mnasnet1_0/igie) | | 4.3.0 |
+| MobileNetV2 | FP16 | [✅](models/cv/classification/mobilenet_v2/igie) | [✅](models/cv/classification/mobilenet_v2/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/mobilenet_v2/igie) | [✅](models/cv/classification/mobilenet_v2/ixrt) | 4.3.0 |
+| MobileNetV3_Large | FP16 | [✅](models/cv/classification/mobilenet_v3_large/igie) | | 4.3.0 |
+| MobileNetV3_Small | FP16 | [✅](models/cv/classification/mobilenet_v3/igie) | [✅](models/cv/classification/mobilenet_v3/ixrt) | 4.3.0 |
 | MViTv2_base | FP16 | [✅](models/cv/classification/mvitv2_base/igie) | | 4.2.0 |
-| RegNet_x_16gf | FP16 | [✅](models/cv/classification/regnet_x_16gf/igie) | | 4.2.0 |
-| RegNet_x_1_6gf | FP16 | [✅](models/cv/classification/regnet_x_1_6gf/igie) | | 4.2.0 |
-| RegNet_x_3_2gf | FP16 | [✅](models/cv/classification/regnet_x_3_2gf/igie) | | 4.2.0 |
-| RegNet_y_1_6gf | FP16 | [✅](models/cv/classification/regnet_y_1_6gf/igie) | | 4.2.0 |
-| RegNet_y_16gf | FP16 | [✅](models/cv/classification/regnet_y_16gf/igie) | | 4.2.0 |
-| RepVGG | FP16 | [✅](models/cv/classification/repvgg/igie) | [✅](models/cv/classification/repvgg/ixrt) | 4.2.0 |
-| Res2Net50 | FP16 | [✅](models/cv/classification/res2net50/igie) | [✅](models/cv/classification/res2net50/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/res2net50/ixrt) | 4.2.0 |
-| ResNeSt50 | FP16 | [✅](models/cv/classification/resnest50/igie) | | 4.2.0 |
-| ResNet101 | FP16 | [✅](models/cv/classification/resnet101/igie) | [✅](models/cv/classification/resnet101/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/resnet101/igie) | [✅](models/cv/classification/resnet101/ixrt) | 4.2.0 |
-| ResNet152 | FP16 | [✅](models/cv/classification/resnet152/igie) | | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/resnet152/igie) | | 4.2.0 |
-| ResNet18 | FP16 | [✅](models/cv/classification/resnet18/igie) | [✅](models/cv/classification/resnet18/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/resnet18/igie) | [✅](models/cv/classification/resnet18/ixrt) | 4.2.0 |
-| ResNet34 | FP16 | | [✅](models/cv/classification/resnet34/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/resnet34/ixrt) | 4.2.0 |
-| ResNet50 | FP16 | [✅](models/cv/classification/resnet50/igie) | [✅](models/cv/classification/resnet50/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/resnet50/igie) | | 4.2.0 |
-| ResNetV1D50 | FP16 | [✅](models/cv/classification/resnetv1d50/igie) | [✅](models/cv/classification/resnetv1d50/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/resnetv1d50/ixrt) | 4.2.0 |
-| ResNeXt50_32x4d | FP16 | [✅](models/cv/classification/resnext50_32x4d/igie) | [✅](models/cv/classification/resnext50_32x4d/ixrt) | 4.2.0 |
-| ResNeXt101_64x4d | FP16 | [✅](models/cv/classification/resnext101_64x4d/igie) | [✅](models/cv/classification/resnext101_64x4d/ixrt) | 4.2.0 |
-| ResNeXt101_32x8d | FP16 | [✅](models/cv/classification/resnext101_32x8d/igie) | [✅](models/cv/classification/resnext101_32x8d/ixrt) | 4.2.0 |
-| SEResNet50 | FP16 | [✅](models/cv/classification/se_resnet50/igie) | | 4.2.0 |
-| ShuffleNetV1 | FP16 | | [✅](models/cv/classification/shufflenet_v1/ixrt) | 4.2.0 |
-| ShuffleNetV2_x0_5 | FP16 | [✅](models/cv/classification/shufflenetv2_x0_5/igie) | [✅](models/cv/classification/shufflenetv2_x0_5/ixrt) | 4.2.0 |
-| ShuffleNetV2_x1_0 | FP16 | [✅](models/cv/classification/shufflenetv2_x1_0/igie) | [✅](models/cv/classification/shufflenetv2_x1_0/ixrt) | 4.2.0 |
-| ShuffleNetV2_x1_5 | FP16 | [✅](models/cv/classification/shufflenetv2_x1_5/igie) | [✅](models/cv/classification/shufflenetv2_x1_5/ixrt) | 4.2.0 |
-| ShuffleNetV2_x2_0 | FP16 | [✅](models/cv/classification/shufflenetv2_x2_0/igie) | [✅](models/cv/classification/shufflenetv2_x2_0/ixrt) | 4.2.0 |
-| SqueezeNet 1.0 | FP16 | [✅](models/cv/classification/squeezenet_v1_0/igie) | [✅](models/cv/classification/squeezenet_v1_0/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/squeezenet_v1_0/ixrt) | 4.2.0 |
-| SqueezeNet 1.1 | FP16 | [✅](models/cv/classification/squeezenet_v1_1/igie) | [✅](models/cv/classification/squeezenet_v1_1/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/classification/squeezenet_v1_1/ixrt) | 4.2.0 |
-| SVT Base | FP16 | [✅](models/cv/classification/svt_base/igie) | | 4.2.0 |
-| Swin Transformer | FP16 | [✅](models/cv/classification/swin_transformer/igie) | | 4.2.0 |
-| Swin Transformer Large | FP16 | | [✅](models/cv/classification/swin_transformer_large/ixrt) | 4.2.0 |
-| Twins_PCPVT | FP16 | [✅](models/cv/classification/twins_pcpvt/igie) | | 4.2.0 |
-| VAN_B0 | FP16 | [✅](models/cv/classification/van_b0/igie) | | 4.2.0 |
-| VGG11 | FP16 | [✅](models/cv/classification/vgg11/igie) | | 4.2.0 |
-| VGG16 | FP16 | [✅](models/cv/classification/vgg16/igie) | [✅](models/cv/classification/vgg16/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/vgg16/igie) | | 4.2.0 |
-| VGG19 | FP16 | [✅](models/cv/classification/vgg19/igie) | | 4.2.0 |
-| VGG19_BN | FP16 | [✅](models/cv/classification/vgg19_bn/igie) | | 4.2.0 |
-| ViT | FP16 | [✅](models/cv/classification/vit/igie) | | 4.2.0 |
-| Wide ResNet50 | FP16 | [✅](models/cv/classification/wide_resnet50/igie) | [✅](models/cv/classification/wide_resnet50/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/classification/wide_resnet50/igie) | [✅](models/cv/classification/wide_resnet50/ixrt) | 4.2.0 |
-| Wide ResNet101 | FP16 | [✅](models/cv/classification/wide_resnet101/igie) | | 4.2.0 |
+| RegNet_x_16gf | FP16 | [✅](models/cv/classification/regnet_x_16gf/igie) | | 4.3.0 |
+| RegNet_x_1_6gf | FP16 | [✅](models/cv/classification/regnet_x_1_6gf/igie) | | 4.3.0 |
+| RegNet_x_3_2gf | FP16 | [✅](models/cv/classification/regnet_x_3_2gf/igie) | | 4.3.0 |
+| RegNet_y_1_6gf | FP16 | [✅](models/cv/classification/regnet_y_1_6gf/igie) | | 4.3.0 |
+| RegNet_y_16gf | FP16 | [✅](models/cv/classification/regnet_y_16gf/igie) | | 4.3.0 |
+| RepVGG | FP16 | [✅](models/cv/classification/repvgg/igie) | [✅](models/cv/classification/repvgg/ixrt) | 4.3.0 |
+| Res2Net50 | FP16 | [✅](models/cv/classification/res2net50/igie) | [✅](models/cv/classification/res2net50/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/res2net50/ixrt) | 4.3.0 |
+| ResNeSt50 | FP16 | [✅](models/cv/classification/resnest50/igie) | | 4.3.0 |
+| ResNet101 | FP16 | [✅](models/cv/classification/resnet101/igie) | [✅](models/cv/classification/resnet101/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/resnet101/igie) | [✅](models/cv/classification/resnet101/ixrt) | 4.3.0 |
+| ResNet152 | FP16 | [✅](models/cv/classification/resnet152/igie) | | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/resnet152/igie) | | 4.3.0 |
+| ResNet18 | FP16 | [✅](models/cv/classification/resnet18/igie) | [✅](models/cv/classification/resnet18/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/resnet18/igie) | [✅](models/cv/classification/resnet18/ixrt) | 4.3.0 |
+| ResNet34 | FP16 | | [✅](models/cv/classification/resnet34/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/resnet34/ixrt) | 4.3.0 |
+| ResNet50 | FP16 | [✅](models/cv/classification/resnet50/igie) | [✅](models/cv/classification/resnet50/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/resnet50/igie) | | 4.3.0 |
+| ResNetV1D50 | FP16 | [✅](models/cv/classification/resnetv1d50/igie) | [✅](models/cv/classification/resnetv1d50/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/resnetv1d50/ixrt) | 4.3.0 |
+| ResNeXt50_32x4d | FP16 | [✅](models/cv/classification/resnext50_32x4d/igie) | [✅](models/cv/classification/resnext50_32x4d/ixrt) | 4.3.0 |
+| ResNeXt101_64x4d | FP16 | [✅](models/cv/classification/resnext101_64x4d/igie) | [✅](models/cv/classification/resnext101_64x4d/ixrt) | 4.3.0 |
+| ResNeXt101_32x8d | FP16 | [✅](models/cv/classification/resnext101_32x8d/igie) | [✅](models/cv/classification/resnext101_32x8d/ixrt) | 4.3.0 |
+| SEResNet50 | FP16 | [✅](models/cv/classification/se_resnet50/igie) | | 4.3.0 |
+| ShuffleNetV1 | FP16 | | [✅](models/cv/classification/shufflenet_v1/ixrt) | 4.3.0 |
+| ShuffleNetV2_x0_5 | FP16 | [✅](models/cv/classification/shufflenetv2_x0_5/igie) | [✅](models/cv/classification/shufflenetv2_x0_5/ixrt) | 4.3.0 |
+| ShuffleNetV2_x1_0 | FP16 | [✅](models/cv/classification/shufflenetv2_x1_0/igie) | [✅](models/cv/classification/shufflenetv2_x1_0/ixrt) | 4.3.0 |
+| ShuffleNetV2_x1_5 | FP16 | [✅](models/cv/classification/shufflenetv2_x1_5/igie) | [✅](models/cv/classification/shufflenetv2_x1_5/ixrt) | 4.3.0 |
+| ShuffleNetV2_x2_0 | FP16 | [✅](models/cv/classification/shufflenetv2_x2_0/igie) | [✅](models/cv/classification/shufflenetv2_x2_0/ixrt) | 4.3.0 |
+| SqueezeNet 1.0 | FP16 | [✅](models/cv/classification/squeezenet_v1_0/igie) | [✅](models/cv/classification/squeezenet_v1_0/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/squeezenet_v1_0/ixrt) | 4.3.0 |
+| SqueezeNet 1.1 | FP16 | [✅](models/cv/classification/squeezenet_v1_1/igie) | [✅](models/cv/classification/squeezenet_v1_1/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/classification/squeezenet_v1_1/ixrt) | 4.3.0 |
+| SVT Base | FP16 | [✅](models/cv/classification/svt_base/igie) | | 4.3.0 |
+| Swin Transformer | FP16 | [✅](models/cv/classification/swin_transformer/igie) | | 4.3.0 |
+| Swin Transformer Large | FP16 | | [✅](models/cv/classification/swin_transformer_large/ixrt) | 4.3.0 |
+| Twins_PCPVT | FP16 | [✅](models/cv/classification/twins_pcpvt/igie) | | 4.3.0 |
+| VAN_B0 | FP16 | [✅](models/cv/classification/van_b0/igie) | | 4.3.0 |
+| VGG11 | FP16 | [✅](models/cv/classification/vgg11/igie) | | 4.3.0 |
+| VGG16 | FP16 | [✅](models/cv/classification/vgg16/igie) | [✅](models/cv/classification/vgg16/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/vgg16/igie) | | 4.3.0 |
+| VGG19 | FP16 | [✅](models/cv/classification/vgg19/igie) | | 4.3.0 |
+| VGG19_BN | FP16 | [✅](models/cv/classification/vgg19_bn/igie) | | 4.3.0 |
+| ViT | FP16 | [✅](models/cv/classification/vit/igie) | | 4.3.0 |
+| Wide ResNet50 | FP16 | [✅](models/cv/classification/wide_resnet50/igie) | [✅](models/cv/classification/wide_resnet50/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/classification/wide_resnet50/igie) | [✅](models/cv/classification/wide_resnet50/ixrt) | 4.3.0 |
+| Wide ResNet101 | FP16 | [✅](models/cv/classification/wide_resnet101/igie) | | 4.3.0 |
 
 #### 目标检测
 
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |------------|-------|-------------------------------------------------|-------------------------------------------------|-----------|
-| ATSS | FP16 | [✅](models/cv/object_detection/atss/igie) | | 4.2.0 |
-| CenterNet | FP16 | [✅](models/cv/object_detection/centernet/igie) | [✅](models/cv/object_detection/centernet/ixrt) | 4.2.0 |
-| DETR | FP16 | | [✅](models/cv/object_detection/detr/ixrt) | 4.2.0 |
-| FCOS | FP16 | [✅](models/cv/object_detection/fcos/igie) | [✅](models/cv/object_detection/fcos/ixrt) | 4.2.0 |
-| FoveaBox | FP16 | [✅](models/cv/object_detection/foveabox/igie) | [✅](models/cv/object_detection/foveabox/ixrt) | 4.2.0 |
-| FSAF | FP16 | [✅](models/cv/object_detection/fsaf/igie) | [✅](models/cv/object_detection/fsaf/ixrt) | 4.2.0 |
-| GFL | FP16 | [✅](models/cv/object_detection/gfl/igie) | | 4.2.0 |
-| HRNet | FP16 | [✅](models/cv/object_detection/hrnet/igie) | [✅](models/cv/object_detection/hrnet/ixrt) | 4.2.0 |
-| PAA | FP16 | [✅](models/cv/object_detection/paa/igie) | | 4.2.0 |
-| RetinaFace | FP16 | [✅](models/cv/object_detection/retinaface/igie) | [✅](models/cv/object_detection/retinaface/ixrt) | 4.2.0 |
-| RetinaNet | FP16 | [✅](models/cv/object_detection/retinanet/igie) | | 4.2.0 |
-| RTMDet | FP16 | [✅](models/cv/object_detection/rtmdet/igie) | | 4.2.0 |
-| SABL | FP16 | [✅](models/cv/object_detection/sabl/igie) | | 4.2.0 |
-| YOLOv3 | FP16 | [✅](models/cv/object_detection/yolov3/igie) | [✅](models/cv/object_detection/yolov3/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov3/igie) | [✅](models/cv/object_detection/yolov3/ixrt) | 4.2.0 |
-| YOLOv4 | FP16 | [✅](models/cv/object_detection/yolov4/igie) | [✅](models/cv/object_detection/yolov4/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov4/igie16) | [✅](models/cv/object_detection/yolov4/ixrt16) | 4.2.0 |
-| YOLOv5 | FP16 | [✅](models/cv/object_detection/yolov5/igie) | [✅](models/cv/object_detection/yolov5/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov5/igie) | [✅](models/cv/object_detection/yolov5/ixrt) | 4.2.0 |
-| YOLOv5s | FP16 | | [✅](models/cv/object_detection/yolov5s/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/object_detection/yolov5s/ixrt) | 4.2.0 |
-| YOLOv6 | FP16 | [✅](models/cv/object_detection/yolov6/igie) | [✅](models/cv/object_detection/yolov6/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/object_detection/yolov6/ixrt) | 4.2.0 |
-| YOLOv7 | FP16 | [✅](models/cv/object_detection/yolov7/igie) | [✅](models/cv/object_detection/yolov7/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov7/igie) | [✅](models/cv/object_detection/yolov7/ixrt) | 4.2.0 |
-| YOLOv8 | FP16 | [✅](models/cv/object_detection/yolov8/igie) | [✅](models/cv/object_detection/yolov8/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolov8/igie) | [✅](models/cv/object_detection/yolov8/ixrt) | 4.2.0 |
-| YOLOv9 | FP16 | [✅](models/cv/object_detection/yolov9/igie) | [✅](models/cv/object_detection/yolov9/ixrt) | 4.2.0 |
-| YOLOv10 | FP16 | [✅](models/cv/object_detection/yolov10/igie) | [✅](models/cv/object_detection/yolov10/ixrt) | 4.2.0 |
-| YOLOv11 | FP16 | [✅](models/cv/object_detection/yolov11/igie) | [✅](models/cv/object_detection/yolov11/ixrt) | 4.2.0 |
-| YOLOv12 | FP16 | [✅](models/cv/object_detection/yolov12/igie) | | 4.2.0 |
-| YOLOX | FP16 | [✅](models/cv/object_detection/yolox/igie) | [✅](models/cv/object_detection/yolox/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/cv/object_detection/yolox/igie) | [✅](models/cv/object_detection/yolox/ixrt) | 4.2.0 |
+| ATSS | FP16 | [✅](models/cv/object_detection/atss/igie) | | 4.3.0 |
+| CenterNet | FP16 | [✅](models/cv/object_detection/centernet/igie) | [✅](models/cv/object_detection/centernet/ixrt) | 4.3.0 |
+| DETR | FP16 | | [✅](models/cv/object_detection/detr/ixrt) | 4.3.0 |
+| FCOS | FP16 | [✅](models/cv/object_detection/fcos/igie) | [✅](models/cv/object_detection/fcos/ixrt) | 4.3.0 |
+| FoveaBox | FP16 | [✅](models/cv/object_detection/foveabox/igie) | [✅](models/cv/object_detection/foveabox/ixrt) | 4.3.0 |
+| FSAF | FP16 | [✅](models/cv/object_detection/fsaf/igie) | [✅](models/cv/object_detection/fsaf/ixrt) | 4.3.0 |
+| GFL | FP16 | [✅](models/cv/object_detection/gfl/igie) | | 4.3.0 |
+| HRNet | FP16 | [✅](models/cv/object_detection/hrnet/igie) | [✅](models/cv/object_detection/hrnet/ixrt) | 4.3.0 |
+| PAA | FP16 | [✅](models/cv/object_detection/paa/igie) | | 4.3.0 |
+| RetinaFace | FP16 | [✅](models/cv/object_detection/retinaface/igie) | [✅](models/cv/object_detection/retinaface/ixrt) | 4.3.0 |
+| RetinaNet | FP16 | [✅](models/cv/object_detection/retinanet/igie) | | 4.3.0 |
+| RTMDet | FP16 | [✅](models/cv/object_detection/rtmdet/igie) | | 4.3.0 |
+| SABL | FP16 | [✅](models/cv/object_detection/sabl/igie) | | 4.3.0 |
+| YOLOv3 | FP16 | [✅](models/cv/object_detection/yolov3/igie) | [✅](models/cv/object_detection/yolov3/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov3/igie) | [✅](models/cv/object_detection/yolov3/ixrt) | 4.3.0 |
+| YOLOv4 | FP16 | [✅](models/cv/object_detection/yolov4/igie) | [✅](models/cv/object_detection/yolov4/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov4/igie16) | [✅](models/cv/object_detection/yolov4/ixrt16) | 4.3.0 |
+| YOLOv5 | FP16 | [✅](models/cv/object_detection/yolov5/igie) | [✅](models/cv/object_detection/yolov5/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov5/igie) | [✅](models/cv/object_detection/yolov5/ixrt) | 4.3.0 |
+| YOLOv5s | FP16 | | [✅](models/cv/object_detection/yolov5s/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/object_detection/yolov5s/ixrt) | 4.3.0 |
+| YOLOv6 | FP16 | [✅](models/cv/object_detection/yolov6/igie) | [✅](models/cv/object_detection/yolov6/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/object_detection/yolov6/ixrt) | 4.3.0 |
+| YOLOv7 | FP16 | [✅](models/cv/object_detection/yolov7/igie) | [✅](models/cv/object_detection/yolov7/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov7/igie) | [✅](models/cv/object_detection/yolov7/ixrt) | 4.3.0 |
+| YOLOv8 | FP16 | [✅](models/cv/object_detection/yolov8/igie) | [✅](models/cv/object_detection/yolov8/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolov8/igie) | [✅](models/cv/object_detection/yolov8/ixrt) | 4.3.0 |
+| YOLOv9 | FP16 | [✅](models/cv/object_detection/yolov9/igie) | [✅](models/cv/object_detection/yolov9/ixrt) | 4.3.0 |
+| YOLOv10 | FP16 | [✅](models/cv/object_detection/yolov10/igie) | [✅](models/cv/object_detection/yolov10/ixrt) | 4.3.0 |
+| YOLOv11 | FP16 | [✅](models/cv/object_detection/yolov11/igie) | [✅](models/cv/object_detection/yolov11/ixrt) | 4.3.0 |
+| YOLOv12 | FP16 | [✅](models/cv/object_detection/yolov12/igie) | | 4.3.0 |
+| YOLOX | FP16 | [✅](models/cv/object_detection/yolox/igie) | [✅](models/cv/object_detection/yolox/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/cv/object_detection/yolox/igie) | [✅](models/cv/object_detection/yolox/ixrt) | 4.3.0 |
 
 #### 人脸识别
 
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |---------|-------|------|----------------------------------------------|-----------|
-| FaceNet | FP16 | | [✅](models/cv/face_recognition/facenet/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/cv/face_recognition/facenet/ixrt) | 4.2.0 |
+| FaceNet | FP16 | | [✅](models/cv/face_recognition/facenet/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/cv/face_recognition/facenet/ixrt) | 4.3.0 |
 
 #### 光学字符识别(OCR)
 
 | Model | Prec. | IGIE | IXUCA SDK |
 |---------------|-------|---------------------------------------|-----------|
-| Kie_layoutXLM | FP16 | [✅](models/cv/ocr/kie_layoutxlm/igie) | 4.2.0 |
-| SVTR | FP16 | [✅](models/cv/ocr/svtr/igie) | 4.2.0 |
+| Kie_layoutXLM | FP16 | [✅](models/cv/ocr/kie_layoutxlm/igie) | 4.3.0 |
+| SVTR | FP16 | [✅](models/cv/ocr/svtr/igie) | 4.3.0 |
 
 #### 姿态估计
 
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |----------------------|-------|-----------------------------------------------|----------------------------------------------------------|-----------|
-| HRNetPose | FP16 | [✅](models/cv/pose_estimation/hrnetpose/igie) | | 4.2.0 |
-| Lightweight OpenPose | FP16 | | [✅](models/cv/pose_estimation/lightweight_openpose/ixrt) | 4.2.0 |
-| RTMPose | FP16 | [✅](models/cv/pose_estimation/rtmpose/igie) | [✅](models/cv/pose_estimation/rtmpose/ixrt) | 4.2.0 |
+| HRNetPose | FP16 | [✅](models/cv/pose_estimation/hrnetpose/igie) | | 4.3.0 |
+| Lightweight OpenPose | FP16 | | [✅](models/cv/pose_estimation/lightweight_openpose/ixrt) | 4.3.0 |
+| RTMPose | FP16 | [✅](models/cv/pose_estimation/rtmpose/igie) | [✅](models/cv/pose_estimation/rtmpose/ixrt) | 4.3.0 |
 
 #### 实例分割
 
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |------------|-------|------|-----------------------------------------------------|-----------|
 | Mask R-CNN | FP16 | | [✅](models/cv/instance_segmentation/mask_rcnn/ixrt) | 4.2.0 |
-| SOLOv1 | FP16 | | [✅](models/cv/instance_segmentation/solov1/ixrt) | 4.2.0 |
+| SOLOv1 | FP16 | | [✅](models/cv/instance_segmentation/solov1/ixrt) | 4.3.0 |
 
 #### 语义分割
 
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |-------|-------|------------------------------------------------|------|-----------|
-| UNet | FP16 | [✅](models/cv/semantic_segmentation/unet/igie) | | 4.2.0 |
+| UNet | FP16 | [✅](models/cv/semantic_segmentation/unet/igie) | | 4.3.0 |
 
 #### 多目标跟踪
 
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |---------------------|-------|----------------------------------------------------|------|-----------|
-| FastReID | FP16 | [✅](models/cv/multi_object_tracking/fastreid/igie) | | 4.2.0 |
-| DeepSort | FP16 | [✅](models/cv/multi_object_tracking/deepsort/igie) | | 4.2.0 |
-| | INT8 | [✅](models/cv/multi_object_tracking/deepsort/igie) | | 4.2.0 |
-| RepNet-Vehicle-ReID | FP16 | [✅](models/cv/multi_object_tracking/repnet/igie) | | 4.2.0 |
+| FastReID | FP16 | [✅](models/cv/multi_object_tracking/fastreid/igie) | | 4.3.0 |
+| DeepSort | FP16 | [✅](models/cv/multi_object_tracking/deepsort/igie) | | 4.3.0 |
+| | INT8 | [✅](models/cv/multi_object_tracking/deepsort/igie) | | 4.3.0 |
+| RepNet-Vehicle-ReID | FP16 | [✅](models/cv/multi_object_tracking/repnet/igie) | | 4.3.0 |
 
 ### 多模态
 
 | Model | vLLM | IxFormer | IXUCA SDK |
 |---------------------|-----------------------------------------------------------------------|------------------------------------------------------------|-----------|
-| Aria | [✅](models/multimodal/vision_language_model/aria/vllm) | | 4.2.0 |
-| Chameleon-7B | [✅](models/multimodal/vision_language_model/chameleon_7b/vllm) | | 4.2.0 |
-| CLIP | | [✅](models/multimodal/vision_language_model/clip/ixformer) | 4.2.0 |
-| Fuyu-8B | [✅](models/multimodal/vision_language_model/fuyu_8b/vllm) | | 4.2.0 |
-| H2OVL Mississippi | [✅](models/multimodal/vision_language_model/h2vol/vllm) | | 4.2.0 |
-| Idefics3 | [✅](models/multimodal/vision_language_model/idefics3/vllm) | | 4.2.0 |
-| InternVL2-4B | [✅](models/multimodal/vision_language_model/intern_vl/vllm) | | 4.2.0 |
-| LLaVA | [✅](models/multimodal/vision_language_model/llava/vllm) | | 4.2.0 |
-| LLaVA-Next-Video-7B | [✅](models/multimodal/vision_language_model/llava_next_video_7b/vllm) | | 4.2.0 |
-| Llama-3.2 | [✅](models/multimodal/vision_language_model/llama-3.2/vllm) | | 4.2.0 |
-| MiniCPM-V 2 | [✅](models/multimodal/vision_language_model/minicpm_v/vllm) | | 4.2.0 |
-| Pixtral | [✅](models/multimodal/vision_language_model/pixtral/vllm) | | 4.2.0 |
+| Aria | [✅](models/multimodal/vision_language_model/aria/vllm) | | 4.3.0 |
+| Chameleon-7B | [✅](models/multimodal/vision_language_model/chameleon_7b/vllm) | | 4.3.0 |
+| CLIP | | [✅](models/multimodal/vision_language_model/clip/ixformer) | 4.3.0 |
+| Fuyu-8B | [✅](models/multimodal/vision_language_model/fuyu_8b/vllm) | | 4.3.0 |
+| H2OVL Mississippi | [✅](models/multimodal/vision_language_model/h2vol/vllm) | | 4.3.0 |
+| Idefics3 | [✅](models/multimodal/vision_language_model/idefics3/vllm) | | 4.3.0 |
+| InternVL2-4B | [✅](models/multimodal/vision_language_model/intern_vl/vllm) | | 4.3.0 |
+| LLaVA | [✅](models/multimodal/vision_language_model/llava/vllm) | | 4.3.0 |
+| LLaVA-Next-Video-7B | [✅](models/multimodal/vision_language_model/llava_next_video_7b/vllm) | | 4.3.0 |
+| Llama-3.2 | [✅](models/multimodal/vision_language_model/llama-3.2/vllm) | | 4.3.0 |
+| MiniCPM-V 2 | [✅](models/multimodal/vision_language_model/minicpm_v/vllm) | | 4.3.0 |
+| Pixtral | [✅](models/multimodal/vision_language_model/pixtral/vllm) | | 4.3.0 |
 
 ### 自然语言处理(NLP)
 
@@ -255,15 +255,15 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |------------------|-------|-------------------------------------------|-------------------------------------------|-----------|
-| ALBERT | FP16 | | [✅](models/nlp/plm/albert/ixrt) | 4.2.0 |
-| BERT Base NER | INT8 | [✅](models/nlp/plm/bert_base_ner/igie) | | 4.2.0 |
-| BERT Base SQuAD | FP16 | [✅](models/nlp/plm/bert_base_squad/igie) | [✅](models/nlp/plm/bert_base_squad/ixrt) | 4.2.0 |
-| | INT8 | | [✅](models/nlp/plm/bert_base_squad/ixrt) | 4.2.0 |
-| BERT Large SQuAD | FP16 | [✅](models/nlp/plm/bert_large_squad/igie) | [✅](models/nlp/plm/bert_large_squad/ixrt) | 4.2.0 |
-| | INT8 | [✅](models/nlp/plm/bert_large_squad/igie) | [✅](models/nlp/plm/bert_large_squad/ixrt) | 4.2.0 |
-| DeBERTa | FP16 | | [✅](models/nlp/plm/deberta/ixrt) | 4.2.0 |
-| RoBERTa | FP16 | | [✅](models/nlp/plm/roberta/ixrt) | 4.2.0 |
-| RoFormer | FP16 | | [✅](models/nlp/plm/roformer/ixrt) | 4.2.0 |
+| ALBERT | FP16 | | [✅](models/nlp/plm/albert/ixrt) | 4.3.0 |
+| BERT Base NER | INT8 | [✅](models/nlp/plm/bert_base_ner/igie) | | 4.3.0 |
+| BERT Base SQuAD | FP16 | [✅](models/nlp/plm/bert_base_squad/igie) | [✅](models/nlp/plm/bert_base_squad/ixrt) | 4.3.0 |
+| | INT8 | | [✅](models/nlp/plm/bert_base_squad/ixrt) | 4.3.0 |
+| BERT Large SQuAD | FP16 | [✅](models/nlp/plm/bert_large_squad/igie) | [✅](models/nlp/plm/bert_large_squad/ixrt) | 4.3.0 |
+| | INT8 | [✅](models/nlp/plm/bert_large_squad/igie) | [✅](models/nlp/plm/bert_large_squad/ixrt) | 4.3.0 |
+| DeBERTa | FP16 | | [✅](models/nlp/plm/deberta/ixrt) | 4.3.0 |
+| RoBERTa | FP16 | | [✅](models/nlp/plm/roberta/ixrt) | 4.3.0 |
+| RoFormer | FP16 | | [✅](models/nlp/plm/roformer/ixrt) | 4.3.0 |
 | VideoBERT | FP16 | | [✅](models/nlp/plm/videobert/ixrt) | 4.2.0 |
 
 ### 语音
 
@@ -272,7 +272,7 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |-----------------|-------|-----------------------------------------------------|-----------------------------------------------------------|-----------|
-| Conformer | FP16 | [✅](models/audio/speech_recognition/conformer/igie) | [✅](models/audio/speech_recognition/conformer/ixrt) | 4.2.0 |
+| Conformer | FP16 | [✅](models/audio/speech_recognition/conformer/igie) | [✅](models/audio/speech_recognition/conformer/ixrt) | 4.3.0 |
 | Transformer ASR | FP16 | | [✅](models/audio/speech_recognition/transformer_asr/ixrt) | 4.2.0 |
 
 ### 其他
 
@@ -281,7 +281,7 @@ DeepSparkInference将按季度进行版本更新,后续会逐步丰富模型
 | Model | Prec. | IGIE | IxRT | IXUCA SDK |
 |-------------|-------|------|------------------------------------------------------|-----------|
-| Wide & Deep | FP16 | | [✅](models/others/recommendation/wide_and_deep/ixrt) | 4.2.0 |
+| Wide & Deep | FP16 | | [✅](models/others/recommendation/wide_and_deep/ixrt) | 4.3.0 |
 
 ---
 
diff --git a/models/audio/speech_recognition/conformer/igie/README.md b/models/audio/speech_recognition/conformer/igie/README.md
index ae96f9d4..585d70d0 100644
--- a/models/audio/speech_recognition/conformer/igie/README.md
+++ b/models/audio/speech_recognition/conformer/igie/README.md
@@ -11,7 +11,8 @@ Conformer applies convolution to the Encoder layer of Transformer, enhancing the
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/audio/speech_recognition/conformer/ixrt/README.md b/models/audio/speech_recognition/conformer/ixrt/README.md
index d73a68de..56ea26cc 100644
--- a/models/audio/speech_recognition/conformer/ixrt/README.md
+++ b/models/audio/speech_recognition/conformer/ixrt/README.md
@@ -8,7 +8,8 @@ Conformer is a speech recognition model proposed by Google in 2020. It combines
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/alexnet/igie/README.md b/models/cv/classification/alexnet/igie/README.md
index 1c69881e..c1f779cb 100644
--- a/models/cv/classification/alexnet/igie/README.md
+++ b/models/cv/classification/alexnet/igie/README.md
@@ -12,7 +12,8 @@ non-linearity, allowing the model to learn complex features from input images.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/alexnet/ixrt/README.md b/models/cv/classification/alexnet/ixrt/README.md
index 723c2145..34b11957 100644
--- a/models/cv/classification/alexnet/ixrt/README.md
+++ b/models/cv/classification/alexnet/ixrt/README.md
@@ -9,7 +9,8 @@ layers as the basic building blocks.
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/clip/igie/README.md b/models/cv/classification/clip/igie/README.md index 23bdeb48..8460c763 100644 --- a/models/cv/classification/clip/igie/README.md +++ b/models/cv/classification/clip/igie/README.md @@ -8,7 +8,8 @@ CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/conformer_base/igie/README.md b/models/cv/classification/conformer_base/igie/README.md index cb81979d..d05a8756 100644 --- a/models/cv/classification/conformer_base/igie/README.md +++ b/models/cv/classification/conformer_base/igie/README.md @@ -8,7 +8,8 @@ Conformer is a novel network architecture that addresses the limitations of conv | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_base/igie/README.md b/models/cv/classification/convnext_base/igie/README.md index 2a60872f..c0774fcf 100644 --- a/models/cv/classification/convnext_base/igie/README.md +++ b/models/cv/classification/convnext_base/igie/README.md @@ -8,7 +8,8 @@ The ConvNeXt Base model represents a significant stride in the evolution of conv | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_base/ixrt/README.md b/models/cv/classification/convnext_base/ixrt/README.md index b90a29d1..9dfc874f 100644 --- a/models/cv/classification/convnext_base/ixrt/README.md +++ b/models/cv/classification/convnext_base/ixrt/README.md @@ -8,7 +8,8 @@ The ConvNeXt Base model represents a significant stride in the evolution of conv | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_s/igie/README.md b/models/cv/classification/convnext_s/igie/README.md index 9222a959..34371ae5 100644 --- a/models/cv/classification/convnext_s/igie/README.md +++ b/models/cv/classification/convnext_s/igie/README.md @@ -8,7 +8,8 @@ ConvNeXt-S is a small-sized model in the ConvNeXt family, designed to balance pe | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git 
a/models/cv/classification/convnext_small/igie/README.md b/models/cv/classification/convnext_small/igie/README.md index c665dcb6..65edf23d 100644 --- a/models/cv/classification/convnext_small/igie/README.md +++ b/models/cv/classification/convnext_small/igie/README.md @@ -8,7 +8,8 @@ The ConvNeXt Small model represents a significant stride in the evolution of con | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_small/ixrt/README.md b/models/cv/classification/convnext_small/ixrt/README.md index 9d2d4d35..8f216b60 100644 --- a/models/cv/classification/convnext_small/ixrt/README.md +++ b/models/cv/classification/convnext_small/ixrt/README.md @@ -8,7 +8,8 @@ The ConvNeXt Small model represents a significant stride in the evolution of con | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/convnext_tiny/igie/README.md b/models/cv/classification/convnext_tiny/igie/README.md index 3d083135..a4427ceb 100644 --- a/models/cv/classification/convnext_tiny/igie/README.md +++ b/models/cv/classification/convnext_tiny/igie/README.md @@ -8,7 +8,8 @@ ConvNeXt is a modern convolutional neural network architecture proposed by Faceb | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/cspdarknet53/igie/README.md b/models/cv/classification/cspdarknet53/igie/README.md index 07da984c..50d69041 100644 --- a/models/cv/classification/cspdarknet53/igie/README.md +++ b/models/cv/classification/cspdarknet53/igie/README.md @@ -8,7 +8,8 @@ CSPDarkNet53 is an enhanced convolutional neural network architecture that reduc | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/cspdarknet53/ixrt/README.md b/models/cv/classification/cspdarknet53/ixrt/README.md index 1cc98a0e..861860d8 100644 --- a/models/cv/classification/cspdarknet53/ixrt/README.md +++ b/models/cv/classification/cspdarknet53/ixrt/README.md @@ -8,7 +8,8 @@ CSPDarkNet53 is an enhanced convolutional neural network architecture that reduc | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/cspresnet50/igie/README.md b/models/cv/classification/cspresnet50/igie/README.md index 5c01bbd1..41da2145 100644 --- a/models/cv/classification/cspresnet50/igie/README.md +++ b/models/cv/classification/cspresnet50/igie/README.md @@ -10,7 +10,8 @@ computations, optimizes 
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/cspresnet50/ixrt/README.md b/models/cv/classification/cspresnet50/ixrt/README.md
index 9d8d5f18..01bed75f 100644
--- a/models/cv/classification/cspresnet50/ixrt/README.md
+++ b/models/cv/classification/cspresnet50/ixrt/README.md
@@ -9,7 +9,8 @@ CSPResNet50 is the one of best models.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/cspresnext50/igie/README.md b/models/cv/classification/cspresnext50/igie/README.md
index 7a8ddc1e..0397bfa3 100644
--- a/models/cv/classification/cspresnext50/igie/README.md
+++ b/models/cv/classification/cspresnext50/igie/README.md
@@ -8,7 +8,8 @@ CSPResNeXt50 is a convolutional neural network that combines the CSPNet and ResN
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/deit_tiny/igie/README.md b/models/cv/classification/deit_tiny/igie/README.md
index 439a3cc7..82215f98 100644
--- a/models/cv/classification/deit_tiny/igie/README.md
+++ b/models/cv/classification/deit_tiny/igie/README.md
@@ -8,7 +8,8 @@ DeiT Tiny is a lightweight vision transformer designed for data-efficient learni
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/deit_tiny/ixrt/README.md b/models/cv/classification/deit_tiny/ixrt/README.md
index b1874b6a..5f5a92e9 100644
--- a/models/cv/classification/deit_tiny/ixrt/README.md
+++ b/models/cv/classification/deit_tiny/ixrt/README.md
@@ -8,7 +8,8 @@ DeiT Tiny is a lightweight vision transformer designed for data-efficient learni
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/densenet121/igie/README.md b/models/cv/classification/densenet121/igie/README.md
index 1637a25f..dc037765 100644
--- a/models/cv/classification/densenet121/igie/README.md
+++ b/models/cv/classification/densenet121/igie/README.md
@@ -8,7 +8,8 @@ DenseNet-121 is a convolutional neural network architecture that belongs to the
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/densenet121/ixrt/README.md b/models/cv/classification/densenet121/ixrt/README.md
index a5dbc7c7..cf204af8 100644
--- a/models/cv/classification/densenet121/ixrt/README.md
+++ b/models/cv/classification/densenet121/ixrt/README.md
@@ -8,7 +8,8 @@ Dense Convolutional Network (DenseNet), connects each layer to every other layer
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/densenet161/igie/README.md b/models/cv/classification/densenet161/igie/README.md
index c2f5a294..9ecec725 100644
--- a/models/cv/classification/densenet161/igie/README.md
+++ b/models/cv/classification/densenet161/igie/README.md
@@ -8,7 +8,8 @@ DenseNet161 is a convolutional neural network architecture that belongs to the f
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/densenet161/ixrt/README.md b/models/cv/classification/densenet161/ixrt/README.md
index fc6f1877..294d2d3c 100644
--- a/models/cv/classification/densenet161/ixrt/README.md
+++ b/models/cv/classification/densenet161/ixrt/README.md
@@ -8,7 +8,8 @@ DenseNet161 is a convolutional neural network architecture that belongs to the f
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/densenet169/igie/README.md b/models/cv/classification/densenet169/igie/README.md
index 5e961ca4..fa6f9807 100644
--- a/models/cv/classification/densenet169/igie/README.md
+++ b/models/cv/classification/densenet169/igie/README.md
@@ -8,7 +8,8 @@ DenseNet-169 is a variant of the Dense Convolutional Network (DenseNet) architec
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/densenet169/ixrt/README.md b/models/cv/classification/densenet169/ixrt/README.md
index 66289e6c..c105e417 100644
--- a/models/cv/classification/densenet169/ixrt/README.md
+++ b/models/cv/classification/densenet169/ixrt/README.md
@@ -8,7 +8,8 @@ Dense Convolutional Network (DenseNet), connects each layer to every other layer
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/densenet201/igie/README.md b/models/cv/classification/densenet201/igie/README.md
index fc54b25b..8040ad6e 100644
--- a/models/cv/classification/densenet201/igie/README.md
+++ b/models/cv/classification/densenet201/igie/README.md
@@ -8,7 +8,8 @@ DenseNet201 is a deep convolutional neural network that stands out for its uniqu
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/densenet201/ixrt/README.md b/models/cv/classification/densenet201/ixrt/README.md
index f394306c..7b9810b2 100644
--- a/models/cv/classification/densenet201/ixrt/README.md
+++ b/models/cv/classification/densenet201/ixrt/README.md
@@ -8,7 +8,8 @@ DenseNet201 is a deep convolutional neural network that stands out for its uniqu
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b0/igie/README.md b/models/cv/classification/efficientnet_b0/igie/README.md
index 413032a8..60bb5373 100644
--- a/models/cv/classification/efficientnet_b0/igie/README.md
+++ b/models/cv/classification/efficientnet_b0/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet-B0 is a lightweight yet highly efficient convolutional neural netwo
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b0/ixrt/README.md b/models/cv/classification/efficientnet_b0/ixrt/README.md
index 2594d622..7606d841 100644
--- a/models/cv/classification/efficientnet_b0/ixrt/README.md
+++ b/models/cv/classification/efficientnet_b0/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNet B0 is a convolutional neural network architecture that belongs to t
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b1/igie/README.md b/models/cv/classification/efficientnet_b1/igie/README.md
index 89e72b3b..f3692795 100644
--- a/models/cv/classification/efficientnet_b1/igie/README.md
+++ b/models/cv/classification/efficientnet_b1/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B1 is a convolutional neural network architecture that falls under
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b1/ixrt/README.md b/models/cv/classification/efficientnet_b1/ixrt/README.md
index 884bf2a9..dbad178f 100644
--- a/models/cv/classification/efficientnet_b1/ixrt/README.md
+++ b/models/cv/classification/efficientnet_b1/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNet B1 is one of the variants in the EfficientNet family of neural netw
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b2/igie/README.md b/models/cv/classification/efficientnet_b2/igie/README.md
index fab2353b..f7acca02 100644
--- a/models/cv/classification/efficientnet_b2/igie/README.md
+++ b/models/cv/classification/efficientnet_b2/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B2 is a member of the EfficientNet family, a series of convolutiona
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b2/ixrt/README.md b/models/cv/classification/efficientnet_b2/ixrt/README.md
index 510c77ba..80d95edf 100644
--- a/models/cv/classification/efficientnet_b2/ixrt/README.md
+++ b/models/cv/classification/efficientnet_b2/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNet B2 is a member of the EfficientNet family, a series of convolutiona
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b3/igie/README.md b/models/cv/classification/efficientnet_b3/igie/README.md
index 44c0fd3e..cf57fcb8 100644
--- a/models/cv/classification/efficientnet_b3/igie/README.md
+++ b/models/cv/classification/efficientnet_b3/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B3 is a member of the EfficientNet family, a series of convolutiona
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b3/ixrt/README.md b/models/cv/classification/efficientnet_b3/ixrt/README.md
index 345086ab..fd312942 100644
--- a/models/cv/classification/efficientnet_b3/ixrt/README.md
+++ b/models/cv/classification/efficientnet_b3/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNet B3 is a member of the EfficientNet family, a series of convolutiona
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b4/igie/README.md b/models/cv/classification/efficientnet_b4/igie/README.md
index 68a12a6a..74111278 100644
--- a/models/cv/classification/efficientnet_b4/igie/README.md
+++ b/models/cv/classification/efficientnet_b4/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B4 is a high-performance convolutional neural network model introdu
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_b5/igie/README.md b/models/cv/classification/efficientnet_b5/igie/README.md
index e2a626bf..03e21de4 100644
--- a/models/cv/classification/efficientnet_b5/igie/README.md
+++ b/models/cv/classification/efficientnet_b5/igie/README.md
@@ -8,7 +8,8 @@ EfficientNet B5 is an efficient convolutional network model designed through a c
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_v2/igie/README.md b/models/cv/classification/efficientnet_v2/igie/README.md
index 752160a5..7a9f3510 100644
--- a/models/cv/classification/efficientnet_v2/igie/README.md
+++ b/models/cv/classification/efficientnet_v2/igie/README.md
@@ -8,7 +8,8 @@ EfficientNetV2 M is an optimized model in the EfficientNetV2 series, which was d
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_v2/ixrt/README.md b/models/cv/classification/efficientnet_v2/ixrt/README.md
index 3742df78..cdcf5de8 100755
--- a/models/cv/classification/efficientnet_v2/ixrt/README.md
+++ b/models/cv/classification/efficientnet_v2/ixrt/README.md
@@ -10,7 +10,8 @@ incorporates a series of enhancement strategies to further boost performance.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_v2_s/igie/README.md b/models/cv/classification/efficientnet_v2_s/igie/README.md
index 8a8fa2fa..3173a515 100644
--- a/models/cv/classification/efficientnet_v2_s/igie/README.md
+++ b/models/cv/classification/efficientnet_v2_s/igie/README.md
@@ -8,7 +8,8 @@ EfficientNetV2 S is an optimized model in the EfficientNetV2 series, which was d
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnet_v2_s/ixrt/README.md b/models/cv/classification/efficientnet_v2_s/ixrt/README.md
index 171fed94..bf9a90ee 100644
--- a/models/cv/classification/efficientnet_v2_s/ixrt/README.md
+++ b/models/cv/classification/efficientnet_v2_s/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNetV2 S is an optimized model in the EfficientNetV2 series, which was d
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnetv2_rw_t/igie/README.md b/models/cv/classification/efficientnetv2_rw_t/igie/README.md
index 3239cf65..393d38c7 100644
--- a/models/cv/classification/efficientnetv2_rw_t/igie/README.md
+++ b/models/cv/classification/efficientnetv2_rw_t/igie/README.md
@@ -8,7 +8,8 @@ EfficientNetV2_rw_t is an enhanced version of the EfficientNet family of convolu
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/efficientnetv2_rw_t/ixrt/README.md b/models/cv/classification/efficientnetv2_rw_t/ixrt/README.md
index 8d17a94c..b97f0dd1 100644
--- a/models/cv/classification/efficientnetv2_rw_t/ixrt/README.md
+++ b/models/cv/classification/efficientnetv2_rw_t/ixrt/README.md
@@ -8,7 +8,8 @@ EfficientNetV2_rw_t is an enhanced version of the EfficientNet family of convolu
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/googlenet/igie/README.md b/models/cv/classification/googlenet/igie/README.md
index 05a5df13..f822a231 100644
--- a/models/cv/classification/googlenet/igie/README.md
+++ b/models/cv/classification/googlenet/igie/README.md
@@ -8,7 +8,8 @@ Introduced in 2014, GoogleNet revolutionized image classification models by intr
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/googlenet/ixrt/README.md b/models/cv/classification/googlenet/ixrt/README.md
index 4b2f0935..252bc958 100644
--- a/models/cv/classification/googlenet/ixrt/README.md
+++ b/models/cv/classification/googlenet/ixrt/README.md
@@ -8,7 +8,8 @@ GoogLeNet is a type of convolutional neural network based on the Inception archi
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/hrnet_w18/igie/README.md b/models/cv/classification/hrnet_w18/igie/README.md
index e69d5840..c7aae72b 100644
--- a/models/cv/classification/hrnet_w18/igie/README.md
+++ b/models/cv/classification/hrnet_w18/igie/README.md
@@ -8,7 +8,8 @@ HRNet, short for High-Resolution Network, presents a paradigm shift in handling
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/hrnet_w18/ixrt/README.md b/models/cv/classification/hrnet_w18/ixrt/README.md
index c4d4bb1e..0d121c8b 100644
--- a/models/cv/classification/hrnet_w18/ixrt/README.md
+++ b/models/cv/classification/hrnet_w18/ixrt/README.md
@@ -8,7 +8,8 @@ HRNet-W18 is a powerful image classification model developed by Jingdong AI Rese
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/inception_resnet_v2/ixrt/README.md b/models/cv/classification/inception_resnet_v2/ixrt/README.md
index d288522e..60a8c5f5 100755
--- a/models/cv/classification/inception_resnet_v2/ixrt/README.md
+++ b/models/cv/classification/inception_resnet_v2/ixrt/README.md
@@ -8,7 +8,8 @@ Inception-ResNet-V2 is a deep learning model proposed by Google in 2016, which c
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/inception_v3/igie/README.md b/models/cv/classification/inception_v3/igie/README.md
index c04e865f..fea4028a 100644
--- a/models/cv/classification/inception_v3/igie/README.md
+++ b/models/cv/classification/inception_v3/igie/README.md
@@ -8,7 +8,8 @@ Inception v3 is a convolutional neural network architecture designed for image r
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/inception_v3/ixrt/README.md b/models/cv/classification/inception_v3/ixrt/README.md
index 8807f231..5fd218df 100755
--- a/models/cv/classification/inception_v3/ixrt/README.md
+++ b/models/cv/classification/inception_v3/ixrt/README.md
@@ -8,7 +8,8 @@ Inception v3 is a convolutional neural network architecture designed for image r
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mlp_mixer_base/igie/README.md b/models/cv/classification/mlp_mixer_base/igie/README.md
index de3fadf4..69d89ac0 100644
--- a/models/cv/classification/mlp_mixer_base/igie/README.md
+++ b/models/cv/classification/mlp_mixer_base/igie/README.md
@@ -8,7 +8,8 @@ MLP-Mixer Base is a foundational model in the MLP-Mixer family, designed to use
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mnasnet0_5/igie/README.md b/models/cv/classification/mnasnet0_5/igie/README.md
index 4847f2ce..5c006721 100644
--- a/models/cv/classification/mnasnet0_5/igie/README.md
+++ b/models/cv/classification/mnasnet0_5/igie/README.md
@@ -8,7 +8,8 @@ MNASNet0_5 is a neural network architecture optimized for mobile devices, design
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mnasnet0_75/igie/README.md b/models/cv/classification/mnasnet0_75/igie/README.md
index 12bf5601..5893fc9c 100644
--- a/models/cv/classification/mnasnet0_75/igie/README.md
+++ b/models/cv/classification/mnasnet0_75/igie/README.md
@@ -8,7 +8,8 @@ MNASNet0_75 is a lightweight convolutional neural network designed for mobile de
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mnasnet1_0/igie/README.md b/models/cv/classification/mnasnet1_0/igie/README.md
index cde6a033..65e2c47a 100644
--- a/models/cv/classification/mnasnet1_0/igie/README.md
+++ b/models/cv/classification/mnasnet1_0/igie/README.md
@@ -8,7 +8,8 @@
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mobilenet_v2/igie/README.md b/models/cv/classification/mobilenet_v2/igie/README.md
index ee928c6e..4b833dcc 100644
--- a/models/cv/classification/mobilenet_v2/igie/README.md
+++ b/models/cv/classification/mobilenet_v2/igie/README.md
@@ -8,7 +8,8 @@ MobileNetV2 is an improvement on V1. Its new ideas include Linear Bottleneck and
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mobilenet_v2/ixrt/README.md b/models/cv/classification/mobilenet_v2/ixrt/README.md
index f702504c..e6c658cb 100644
--- a/models/cv/classification/mobilenet_v2/ixrt/README.md
+++ b/models/cv/classification/mobilenet_v2/ixrt/README.md
@@ -8,7 +8,8 @@ The MobileNetV2 architecture is based on an inverted residual structure where th
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mobilenet_v3/igie/README.md b/models/cv/classification/mobilenet_v3/igie/README.md
index a85288d9..ca611307 100644
--- a/models/cv/classification/mobilenet_v3/igie/README.md
+++ b/models/cv/classification/mobilenet_v3/igie/README.md
@@ -8,7 +8,8 @@ MobileNetV3_Small is a lightweight convolutional neural network architecture des
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mobilenet_v3/ixrt/README.md b/models/cv/classification/mobilenet_v3/ixrt/README.md
index 6827ec69..149bed83 100644
--- a/models/cv/classification/mobilenet_v3/ixrt/README.md
+++ b/models/cv/classification/mobilenet_v3/ixrt/README.md
@@ -8,7 +8,8 @@ MobileNetV3 is a convolutional neural network that is tuned to mobile phone CPUs
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/mobilenet_v3_large/igie/README.md b/models/cv/classification/mobilenet_v3_large/igie/README.md
index 116b9169..df08d4fb 100644
--- a/models/cv/classification/mobilenet_v3_large/igie/README.md
+++ b/models/cv/classification/mobilenet_v3_large/igie/README.md
@@ -8,7 +8,8 @@ MobileNetV3_Large builds upon the success of its predecessors by incorporating s
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/regnet_x_16gf/igie/README.md b/models/cv/classification/regnet_x_16gf/igie/README.md
index a2cc24e5..0b037c5c 100644
--- a/models/cv/classification/regnet_x_16gf/igie/README.md
+++ b/models/cv/classification/regnet_x_16gf/igie/README.md
@@ -9,7 +9,8 @@ RegNet_x_16gf is a deep convolutional neural network from the RegNet family, int
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/regnet_x_1_6gf/igie/README.md b/models/cv/classification/regnet_x_1_6gf/igie/README.md
index 73aaa51e..81b7e39d 100644
--- a/models/cv/classification/regnet_x_1_6gf/igie/README.md
+++ b/models/cv/classification/regnet_x_1_6gf/igie/README.md
@@ -8,7 +8,8 @@ RegNet is a family of models designed for image classification tasks, as describ
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/regnet_x_3_2gf/igie/README.md b/models/cv/classification/regnet_x_3_2gf/igie/README.md
index 8290fbab..1875b102 100644
--- a/models/cv/classification/regnet_x_3_2gf/igie/README.md
+++ b/models/cv/classification/regnet_x_3_2gf/igie/README.md
@@ -8,7 +8,8 @@ RegNet_x_3_2gf is a model from the RegNet series, inspired by the paper *Designi
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/regnet_y_16gf/igie/README.md b/models/cv/classification/regnet_y_16gf/igie/README.md
index a9c573f8..be7585b6 100644
--- a/models/cv/classification/regnet_y_16gf/igie/README.md
+++ b/models/cv/classification/regnet_y_16gf/igie/README.md
@@ -9,7 +9,8 @@ RegNet_y_16gf is an efficient convolutional neural network model in the RegNet f
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/regnet_y_1_6gf/igie/README.md b/models/cv/classification/regnet_y_1_6gf/igie/README.md
index 2151fea7..8fc8fc99 100644
--- a/models/cv/classification/regnet_y_1_6gf/igie/README.md
+++ b/models/cv/classification/regnet_y_1_6gf/igie/README.md
@@ -8,7 +8,8 @@ RegNet is a family of models designed for image classification tasks, as describ
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/repvgg/igie/README.md b/models/cv/classification/repvgg/igie/README.md
index 6ee9e4f4..9e4368ff 100644
--- a/models/cv/classification/repvgg/igie/README.md
+++ b/models/cv/classification/repvgg/igie/README.md
@@ -8,7 +8,8 @@ RepVGG is an innovative convolutional neural network architecture that combines
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/repvgg/ixrt/README.md b/models/cv/classification/repvgg/ixrt/README.md
index 8ec55d1d..897b3375 100644
--- a/models/cv/classification/repvgg/ixrt/README.md
+++ b/models/cv/classification/repvgg/ixrt/README.md
@@ -9,7 +9,8 @@ It was developed by researchers at the University of Oxford and introduced in th
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/res2net50/igie/README.md b/models/cv/classification/res2net50/igie/README.md
index 71f6bd95..fc857f03 100644
--- a/models/cv/classification/res2net50/igie/README.md
+++ b/models/cv/classification/res2net50/igie/README.md
@@ -8,7 +8,8 @@ Res2Net50 is a convolutional neural network architecture that introduces the con
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/res2net50/ixrt/README.md b/models/cv/classification/res2net50/ixrt/README.md
index fc0c52be..b0ce62ca 100644
--- a/models/cv/classification/res2net50/ixrt/README.md
+++ b/models/cv/classification/res2net50/ixrt/README.md
@@ -8,7 +8,8 @@ A novel building block for CNNs, namely Res2Net, by constructing hierarchical re
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnest50/igie/README.md b/models/cv/classification/resnest50/igie/README.md
index c2312862..fdf969b6 100644
--- a/models/cv/classification/resnest50/igie/README.md
+++ b/models/cv/classification/resnest50/igie/README.md
@@ -8,7 +8,8 @@ ResNeSt50 is a deep convolutional neural network model based on the ResNeSt arch
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnet101/igie/README.md b/models/cv/classification/resnet101/igie/README.md
index 762ade7f..94d285ea 100644
--- a/models/cv/classification/resnet101/igie/README.md
+++ b/models/cv/classification/resnet101/igie/README.md
@@ -8,7 +8,8 @@ ResNet101 is a convolutional neural network architecture that belongs to the Res
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnet101/ixrt/README.md b/models/cv/classification/resnet101/ixrt/README.md
index e1418487..8b91c5b5 100644
--- a/models/cv/classification/resnet101/ixrt/README.md
+++ b/models/cv/classification/resnet101/ixrt/README.md
@@ -8,7 +8,8 @@ ResNet-101 is a variant of the ResNet (Residual Network) architecture, and it be
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnet152/igie/README.md b/models/cv/classification/resnet152/igie/README.md
index 5f83f33a..0aed19d5 100644
--- a/models/cv/classification/resnet152/igie/README.md
+++ b/models/cv/classification/resnet152/igie/README.md
@@ -8,7 +8,8 @@ ResNet152 is a convolutional neural network architecture that is part of the Res
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnet18/igie/README.md b/models/cv/classification/resnet18/igie/README.md
index 85adc79d..f3ef4f56 100644
--- a/models/cv/classification/resnet18/igie/README.md
+++ b/models/cv/classification/resnet18/igie/README.md
@@ -8,7 +8,8 @@ ResNet-18 is a relatively compact deep neural network.The ResNet-18 architecture
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnet18/ixrt/README.md b/models/cv/classification/resnet18/ixrt/README.md
index 70d3786b..bfb0d4b3 100644
--- a/models/cv/classification/resnet18/ixrt/README.md
+++ b/models/cv/classification/resnet18/ixrt/README.md
@@ -8,7 +8,8 @@ ResNet-18 is a variant of the ResNet (Residual Network) architecture, which was
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnet34/ixrt/README.md b/models/cv/classification/resnet34/ixrt/README.md
index 3b7956bb..fc63548e 100644
--- a/models/cv/classification/resnet34/ixrt/README.md
+++ b/models/cv/classification/resnet34/ixrt/README.md
@@ -8,7 +8,8 @@ Residual Networks, or ResNets, learn residual functions with reference to the la
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnet50/igie/README.md b/models/cv/classification/resnet50/igie/README.md
index 55fb8db2..c6f43bf6 100644
--- a/models/cv/classification/resnet50/igie/README.md
+++ b/models/cv/classification/resnet50/igie/README.md
@@ -8,7 +8,8 @@ ResNet-50 is a convolutional neural network architecture that belongs to the Res
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnet50/ixrt/README.md b/models/cv/classification/resnet50/ixrt/README.md
index f9db5f1f..c4edfe1d 100644
--- a/models/cv/classification/resnet50/ixrt/README.md
+++ b/models/cv/classification/resnet50/ixrt/README.md
@@ -8,7 +8,8 @@ Residual Networks, or ResNets, learn residual functions with reference to the la
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnetv1d50/igie/README.md b/models/cv/classification/resnetv1d50/igie/README.md
index 52d32a70..b65e89d3 100644
--- a/models/cv/classification/resnetv1d50/igie/README.md
+++ b/models/cv/classification/resnetv1d50/igie/README.md
@@ -8,7 +8,8 @@ ResNetV1D50 is an enhanced version of ResNetV1-50 that incorporates changes like
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnetv1d50/ixrt/README.md b/models/cv/classification/resnetv1d50/ixrt/README.md
index 9a8d945d..5f85b4e1 100644
--- a/models/cv/classification/resnetv1d50/ixrt/README.md
+++ b/models/cv/classification/resnetv1d50/ixrt/README.md
@@ -8,7 +8,8 @@ Residual Networks, or ResNets, learn residual functions with reference to the la
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnext101_32x8d/igie/README.md b/models/cv/classification/resnext101_32x8d/igie/README.md
index d2e6b25d..5215d528 100644
--- a/models/cv/classification/resnext101_32x8d/igie/README.md
+++ b/models/cv/classification/resnext101_32x8d/igie/README.md
@@ -8,7 +8,8 @@ ResNeXt101_32x8d is a deep convolutional neural network introduced in the paper
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnext101_32x8d/ixrt/README.md b/models/cv/classification/resnext101_32x8d/ixrt/README.md
index 5859d915..84a63be8 100644
--- a/models/cv/classification/resnext101_32x8d/ixrt/README.md
+++ b/models/cv/classification/resnext101_32x8d/ixrt/README.md
@@ -12,7 +12,8 @@ This design improves feature extraction while maintaining computational efficien
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnext101_64x4d/igie/README.md b/models/cv/classification/resnext101_64x4d/igie/README.md
index 22ff3449..468eff2d 100644
--- a/models/cv/classification/resnext101_64x4d/igie/README.md
+++ b/models/cv/classification/resnext101_64x4d/igie/README.md
@@ -8,7 +8,8 @@ The ResNeXt101_64x4d is a deep learning model based on the deep residual network
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnext101_64x4d/ixrt/README.md b/models/cv/classification/resnext101_64x4d/ixrt/README.md
index cc647490..503d2847 100644
--- a/models/cv/classification/resnext101_64x4d/ixrt/README.md
+++ b/models/cv/classification/resnext101_64x4d/ixrt/README.md
@@ -11,7 +11,8 @@ various input sizes
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnext50_32x4d/igie/README.md b/models/cv/classification/resnext50_32x4d/igie/README.md
index 5fb0b554..1cf64e6f 100644
--- a/models/cv/classification/resnext50_32x4d/igie/README.md
+++ b/models/cv/classification/resnext50_32x4d/igie/README.md
@@ -8,7 +8,8 @@ The ResNeXt50_32x4d model is a convolutional neural network architecture designe
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/resnext50_32x4d/ixrt/README.md b/models/cv/classification/resnext50_32x4d/ixrt/README.md
index 75e4bd20..da346292 100644
--- a/models/cv/classification/resnext50_32x4d/ixrt/README.md
+++ b/models/cv/classification/resnext50_32x4d/ixrt/README.md
@@ -8,7 +8,8 @@ The ResNeXt50_32x4d model is a convolutional neural network architecture designe
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/seresnet50/igie/README.md b/models/cv/classification/seresnet50/igie/README.md
index 3642b2e5..46b63f84 100644
--- a/models/cv/classification/seresnet50/igie/README.md
+++ b/models/cv/classification/seresnet50/igie/README.md
@@ -8,7 +8,8 @@ SEResNet50 is an enhanced version of the ResNet50 network integrated with Squeez
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenet_v1/ixrt/README.md b/models/cv/classification/shufflenet_v1/ixrt/README.md
index fa020373..9417776f 100644
--- a/models/cv/classification/shufflenet_v1/ixrt/README.md
+++ b/models/cv/classification/shufflenet_v1/ixrt/README.md
@@ -9,7 +9,8 @@ It uses techniques such as deep separable convolution and channel shuffle to red
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenetv2_x0_5/igie/README.md b/models/cv/classification/shufflenetv2_x0_5/igie/README.md
index d5e56128..9844f713 100644
--- a/models/cv/classification/shufflenetv2_x0_5/igie/README.md
+++ b/models/cv/classification/shufflenetv2_x0_5/igie/README.md
@@ -10,7 +10,8 @@ convolutions, and efficient building blocks to further reduce computational comp
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenetv2_x0_5/ixrt/README.md b/models/cv/classification/shufflenetv2_x0_5/ixrt/README.md
index efd84595..dc1d4289 100644
--- a/models/cv/classification/shufflenetv2_x0_5/ixrt/README.md
+++ b/models/cv/classification/shufflenetv2_x0_5/ixrt/README.md
@@ -10,7 +10,8 @@ convolutions, and efficient building blocks to further reduce computational comp
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenetv2_x1_0/igie/README.md b/models/cv/classification/shufflenetv2_x1_0/igie/README.md
index 3e43cf66..0f0ad384 100644
--- a/models/cv/classification/shufflenetv2_x1_0/igie/README.md
+++ b/models/cv/classification/shufflenetv2_x1_0/igie/README.md
@@ -8,7 +8,8 @@ ShuffleNet V2_x1_0 is an efficient convolutional neural network (CNN) architectu
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenetv2_x1_0/ixrt/README.md b/models/cv/classification/shufflenetv2_x1_0/ixrt/README.md
index b2fd0085..5122bd33 100644
--- a/models/cv/classification/shufflenetv2_x1_0/ixrt/README.md
+++ b/models/cv/classification/shufflenetv2_x1_0/ixrt/README.md
@@ -8,7 +8,8 @@ ShuffleNet V2_x1_0 is an efficient convolutional neural network (CNN) architectu
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenetv2_x1_5/igie/README.md b/models/cv/classification/shufflenetv2_x1_5/igie/README.md
index 6eb085a6..d71cd25f 100644
--- a/models/cv/classification/shufflenetv2_x1_5/igie/README.md
+++ b/models/cv/classification/shufflenetv2_x1_5/igie/README.md
@@ -8,7 +8,8 @@ ShuffleNetV2_x1_5 is a lightweight convolutional neural network specifically des
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenetv2_x1_5/ixrt/README.md b/models/cv/classification/shufflenetv2_x1_5/ixrt/README.md
index 34bb7cbe..c10d0a2c 100644
--- a/models/cv/classification/shufflenetv2_x1_5/ixrt/README.md
+++ b/models/cv/classification/shufflenetv2_x1_5/ixrt/README.md
@@ -8,7 +8,8 @@ ShuffleNetV2_x1_5 is a lightweight convolutional neural network specifically des
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenetv2_x2_0/igie/README.md b/models/cv/classification/shufflenetv2_x2_0/igie/README.md
index bfca0291..3839929b 100644
--- a/models/cv/classification/shufflenetv2_x2_0/igie/README.md
+++ b/models/cv/classification/shufflenetv2_x2_0/igie/README.md
@@ -8,7 +8,8 @@ ShuffleNetV2_x2_0 is a lightweight convolutional neural network introduced in th
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/shufflenetv2_x2_0/ixrt/README.md b/models/cv/classification/shufflenetv2_x2_0/ixrt/README.md
index ca8b5212..529e5f86 100644
--- a/models/cv/classification/shufflenetv2_x2_0/ixrt/README.md
+++ b/models/cv/classification/shufflenetv2_x2_0/ixrt/README.md
@@ -8,7 +8,8 @@ ShuffleNetV2_x2_0 is a lightweight convolutional neural network introduced in th
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/squeezenet_v1_0/igie/README.md b/models/cv/classification/squeezenet_v1_0/igie/README.md
index c5a5b244..f5840a95 100644
--- a/models/cv/classification/squeezenet_v1_0/igie/README.md
+++ b/models/cv/classification/squeezenet_v1_0/igie/README.md
@@ -8,7 +8,8 @@ SqueezeNet1_0 is a lightweight convolutional neural network introduced in the pa
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/squeezenet_v1_0/ixrt/README.md b/models/cv/classification/squeezenet_v1_0/ixrt/README.md
index dc92042a..69d79fed 100644
--- a/models/cv/classification/squeezenet_v1_0/ixrt/README.md
+++ b/models/cv/classification/squeezenet_v1_0/ixrt/README.md
@@ -10,7 +10,8 @@ It was developed by researchers at DeepScale and released in 2016.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/squeezenet_v1_1/igie/README.md b/models/cv/classification/squeezenet_v1_1/igie/README.md
index 93fec92f..15907572 100644
--- a/models/cv/classification/squeezenet_v1_1/igie/README.md
+++ b/models/cv/classification/squeezenet_v1_1/igie/README.md
@@ -8,7 +8,8 @@ SqueezeNet 1.1 is an improved version of SqueezeNet, designed for efficient comp
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/squeezenet_v1_1/ixrt/README.md b/models/cv/classification/squeezenet_v1_1/ixrt/README.md
index 6f3a4a10..39811e76 100644
--- a/models/cv/classification/squeezenet_v1_1/ixrt/README.md
+++ b/models/cv/classification/squeezenet_v1_1/ixrt/README.md
@@ -10,7 +10,8 @@ It was developed by researchers at DeepScale and released in 2016.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/svt_base/igie/README.md b/models/cv/classification/svt_base/igie/README.md
index 3076aa46..a71bf645 100644
--- a/models/cv/classification/svt_base/igie/README.md
+++ b/models/cv/classification/svt_base/igie/README.md
@@ -8,7 +8,8 @@ SVT Base is a mid-sized variant of the Sparse Vision Transformer (SVT) series, d
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/swin_transformer/igie/README.md b/models/cv/classification/swin_transformer/igie/README.md
index 06acaa7c..22156236 100644
--- a/models/cv/classification/swin_transformer/igie/README.md
+++ b/models/cv/classification/swin_transformer/igie/README.md
@@ -8,7 +8,8 @@ Swin Transformer is a pioneering neural network architecture that introduces a n
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/swin_transformer_large/ixrt/README.md b/models/cv/classification/swin_transformer_large/ixrt/README.md
index 032c961d..fb19aa78 100644
--- a/models/cv/classification/swin_transformer_large/ixrt/README.md
+++ b/models/cv/classification/swin_transformer_large/ixrt/README.md
@@ -8,7 +8,8 @@ Swin Transformer-Large is a variant of the Swin Transformer, an architecture des
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/twins_pcpvt/igie/README.md b/models/cv/classification/twins_pcpvt/igie/README.md
index 21735425..078afa10 100644
--- a/models/cv/classification/twins_pcpvt/igie/README.md
+++ b/models/cv/classification/twins_pcpvt/igie/README.md
@@ -8,7 +8,8 @@ Twins_PCPVT Small is a lightweight vision transformer model that combines pyrami
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/van_b0/igie/README.md b/models/cv/classification/van_b0/igie/README.md
index 421add0e..ae92b6e8 100644
--- a/models/cv/classification/van_b0/igie/README.md
+++ b/models/cv/classification/van_b0/igie/README.md
@@ -8,7 +8,8 @@ VAN-B0 is a lightweight visual attention network that combines convolution and a
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.06 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.06 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/vgg11/igie/README.md b/models/cv/classification/vgg11/igie/README.md
index 41a9ea9a..f279716d 100644
--- a/models/cv/classification/vgg11/igie/README.md
+++ b/models/cv/classification/vgg11/igie/README.md
@@ -8,7 +8,8 @@ VGG11 is a deep convolutional neural network introduced by the Visual Geometry G
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/vgg16/igie/README.md b/models/cv/classification/vgg16/igie/README.md
index d0fcef2f..f527f15b 100644
--- a/models/cv/classification/vgg16/igie/README.md
+++ b/models/cv/classification/vgg16/igie/README.md
@@ -8,7 +8,8 @@ VGG16 is a convolutional neural network (CNN) architecture designed for image cl
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/vgg16/ixrt/README.md b/models/cv/classification/vgg16/ixrt/README.md
index 589d4a72..c763274c 100644
--- a/models/cv/classification/vgg16/ixrt/README.md
+++ b/models/cv/classification/vgg16/ixrt/README.md
@@ -9,7 +9,8 @@ It finished second in the 2014 ImageNet Massive Visual Identity Challenge (ILSVR
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/cv/classification/vgg19/igie/README.md b/models/cv/classification/vgg19/igie/README.md
index b35cbb4a..7c51ff2a 100644
--- a/models/cv/classification/vgg19/igie/README.md
a/models/cv/classification/vgg19/igie/README.md +++ b/models/cv/classification/vgg19/igie/README.md @@ -8,7 +8,8 @@ VGG19 is a member of the VGG network family, proposed by the Visual Geometry Gro | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/vgg19_bn/igie/README.md b/models/cv/classification/vgg19_bn/igie/README.md index 420ca505..6a1bbec4 100644 --- a/models/cv/classification/vgg19_bn/igie/README.md +++ b/models/cv/classification/vgg19_bn/igie/README.md @@ -8,7 +8,8 @@ VGG19_BN is a variant of the VGG network, based on VGG19 with the addition of Ba | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/vit/igie/README.md b/models/cv/classification/vit/igie/README.md index 45b3ddd0..ac909a26 100644 --- a/models/cv/classification/vit/igie/README.md +++ b/models/cv/classification/vit/igie/README.md @@ -8,7 +8,8 @@ ViT is a novel vision model architecture proposed by Google in the paper *An Ima | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/classification/wide_resnet101/igie/README.md b/models/cv/classification/wide_resnet101/igie/README.md index 7abc2c64..24410f6d 100644 --- a/models/cv/classification/wide_resnet101/igie/README.md +++ b/models/cv/classification/wide_resnet101/igie/README.md @@ -8,7 +8,8 @@ Wide ResNet101 is a variant of the ResNet architecture that focuses on increasin | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/wide_resnet50/igie/README.md b/models/cv/classification/wide_resnet50/igie/README.md index 695cc866..b175fb20 100644 --- a/models/cv/classification/wide_resnet50/igie/README.md +++ b/models/cv/classification/wide_resnet50/igie/README.md @@ -8,7 +8,8 @@ The distinguishing feature of Wide ResNet50 lies in its widened architecture com | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/classification/wide_resnet50/ixrt/README.md b/models/cv/classification/wide_resnet50/ixrt/README.md index 52220b4c..7608697f 100644 --- a/models/cv/classification/wide_resnet50/ixrt/README.md +++ b/models/cv/classification/wide_resnet50/ixrt/README.md @@ -8,7 +8,8 @@ The distinguishing feature of Wide ResNet50 lies in its widened architecture com | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | 
:----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/face_recognition/facenet/ixrt/README.md b/models/cv/face_recognition/facenet/ixrt/README.md index cd58c9b9..c44213ad 100644 --- a/models/cv/face_recognition/facenet/ixrt/README.md +++ b/models/cv/face_recognition/facenet/ixrt/README.md @@ -8,7 +8,8 @@ Facenet is a facial recognition system originally proposed and developed by Goog | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/instance_segmentation/solov1/ixrt/README.md b/models/cv/instance_segmentation/solov1/ixrt/README.md index 62ab78a0..e6af2d23 100644 --- a/models/cv/instance_segmentation/solov1/ixrt/README.md +++ b/models/cv/instance_segmentation/solov1/ixrt/README.md @@ -8,7 +8,8 @@ SOLO (Segmenting Objects by Locations) is a new instance segmentation method tha | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/multi_object_tracking/deepsort/igie/README.md b/models/cv/multi_object_tracking/deepsort/igie/README.md index 9988aa78..3eb067d8 100644 --- a/models/cv/multi_object_tracking/deepsort/igie/README.md +++ b/models/cv/multi_object_tracking/deepsort/igie/README.md @@ -8,7 +8,8 @@ DeepSort integrates deep neural networks with traditional tracking methods to ac | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/multi_object_tracking/fastreid/igie/README.md b/models/cv/multi_object_tracking/fastreid/igie/README.md index c08f7853..6e9a7f7f 100644 --- a/models/cv/multi_object_tracking/fastreid/igie/README.md +++ b/models/cv/multi_object_tracking/fastreid/igie/README.md @@ -8,7 +8,8 @@ FastReID is a research platform that implements state-of-the-art re-identificati | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/multi_object_tracking/repnet/igie/README.md b/models/cv/multi_object_tracking/repnet/igie/README.md index 51625f15..7b3c45e4 100644 --- a/models/cv/multi_object_tracking/repnet/igie/README.md +++ b/models/cv/multi_object_tracking/repnet/igie/README.md @@ -8,7 +8,8 @@ The paper "Deep Relative Distance Learning: Tell the Difference Between Similar | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/atss/igie/README.md b/models/cv/object_detection/atss/igie/README.md index 5ba1d3cc..ddce9399 100644 --- 
a/models/cv/object_detection/atss/igie/README.md +++ b/models/cv/object_detection/atss/igie/README.md @@ -8,7 +8,8 @@ ATSS is an advanced adaptive training sample selection method that effectively e | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/centernet/igie/README.md b/models/cv/object_detection/centernet/igie/README.md index 54316c9f..25115bea 100644 --- a/models/cv/object_detection/centernet/igie/README.md +++ b/models/cv/object_detection/centernet/igie/README.md @@ -8,7 +8,8 @@ CenterNet is an efficient object detection model that simplifies the traditional | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/centernet/ixrt/README.md b/models/cv/object_detection/centernet/ixrt/README.md index a3f4d387..3af16f69 100644 --- a/models/cv/object_detection/centernet/ixrt/README.md +++ b/models/cv/object_detection/centernet/ixrt/README.md @@ -8,7 +8,8 @@ CenterNet is an efficient object detection model that simplifies the traditional | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/detr/ixrt/README.md b/models/cv/object_detection/detr/ixrt/README.md index 25285303..f63ee9e5 100755 --- a/models/cv/object_detection/detr/ixrt/README.md +++ b/models/cv/object_detection/detr/ixrt/README.md @@ -8,7 +8,8 @@ DETR (DEtection TRansformer) is a novel approach that views object detection as | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/fcos/igie/README.md b/models/cv/object_detection/fcos/igie/README.md index 9a57584e..03022c1f 100644 --- a/models/cv/object_detection/fcos/igie/README.md +++ b/models/cv/object_detection/fcos/igie/README.md @@ -8,7 +8,8 @@ FCOS is an innovative one-stage object detection framework that abandons traditi | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/fcos/ixrt/README.md b/models/cv/object_detection/fcos/ixrt/README.md index 721fed15..da669496 100755 --- a/models/cv/object_detection/fcos/ixrt/README.md +++ b/models/cv/object_detection/fcos/ixrt/README.md @@ -9,7 +9,8 @@ For more details, please refer to our [report on Arxiv](https://arxiv.org/abs/19 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | 
+| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/foveabox/igie/README.md b/models/cv/object_detection/foveabox/igie/README.md index 48b40e88..de20d6a4 100644 --- a/models/cv/object_detection/foveabox/igie/README.md +++ b/models/cv/object_detection/foveabox/igie/README.md @@ -8,7 +8,8 @@ FoveaBox is an advanced anchor-free object detection framework that enhances acc | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/foveabox/ixrt/README.md b/models/cv/object_detection/foveabox/ixrt/README.md index b9dd175c..b9cfed86 100644 --- a/models/cv/object_detection/foveabox/ixrt/README.md +++ b/models/cv/object_detection/foveabox/ixrt/README.md @@ -8,7 +8,8 @@ FoveaBox is an advanced anchor-free object detection framework that enhances acc | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/fsaf/igie/README.md b/models/cv/object_detection/fsaf/igie/README.md index a1fb8a1d..bb6c0e59 100644 --- a/models/cv/object_detection/fsaf/igie/README.md +++ b/models/cv/object_detection/fsaf/igie/README.md @@ -8,7 +8,8 @@ The FSAF (Feature Selective Anchor-Free) module is an innovative component for s | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/fsaf/ixrt/README.md b/models/cv/object_detection/fsaf/ixrt/README.md index 7a6d304c..e4250d3e 100644 --- a/models/cv/object_detection/fsaf/ixrt/README.md +++ b/models/cv/object_detection/fsaf/ixrt/README.md @@ -8,7 +8,8 @@ The FSAF (Feature Selective Anchor-Free) module is an innovative component for s | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/gfl/igie/README.md b/models/cv/object_detection/gfl/igie/README.md index d4174cd2..4f64c7b9 100644 --- a/models/cv/object_detection/gfl/igie/README.md +++ b/models/cv/object_detection/gfl/igie/README.md @@ -8,7 +8,8 @@ GFL (Generalized Focal Loss) is an object detection model that utilizes an impro | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/object_detection/hrnet/igie/README.md b/models/cv/object_detection/hrnet/igie/README.md index 3e584a92..8c1a8be3 100644 --- a/models/cv/object_detection/hrnet/igie/README.md +++ b/models/cv/object_detection/hrnet/igie/README.md @@ -8,7 +8,8 @@ HRNet is an advanced deep learning architecture for human pose 
estimation, chara | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/hrnet/ixrt/README.md b/models/cv/object_detection/hrnet/ixrt/README.md index f0b27f07..41626061 100644 --- a/models/cv/object_detection/hrnet/ixrt/README.md +++ b/models/cv/object_detection/hrnet/ixrt/README.md @@ -8,7 +8,8 @@ HRNet is an advanced deep learning architecture for human pose estimation, chara | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/paa/igie/README.md b/models/cv/object_detection/paa/igie/README.md index 9f19fc8d..df48de68 100644 --- a/models/cv/object_detection/paa/igie/README.md +++ b/models/cv/object_detection/paa/igie/README.md @@ -8,7 +8,8 @@ PAA (Probabilistic Anchor Assignment) is an algorithm for object detection that | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/retinaface/igie/README.md b/models/cv/object_detection/retinaface/igie/README.md index 1c4d3028..7f354187 100755 --- a/models/cv/object_detection/retinaface/igie/README.md +++ b/models/cv/object_detection/retinaface/igie/README.md @@ -8,7 +8,8 @@ RetinaFace is an efficient single-stage face detection model that employs a mult | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/retinaface/ixrt/README.md b/models/cv/object_detection/retinaface/ixrt/README.md index 2323b20f..424f9183 100644 --- a/models/cv/object_detection/retinaface/ixrt/README.md +++ b/models/cv/object_detection/retinaface/ixrt/README.md @@ -8,7 +8,8 @@ RetinaFace is an efficient single-stage face detection model that employs a mult | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/retinanet/igie/README.md b/models/cv/object_detection/retinanet/igie/README.md index 08477b23..10c8173e 100644 --- a/models/cv/object_detection/retinanet/igie/README.md +++ b/models/cv/object_detection/retinanet/igie/README.md @@ -8,7 +8,8 @@ RetinaNet, an innovative object detector, challenges the conventional trade-off | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/rtmdet/igie/README.md 
b/models/cv/object_detection/rtmdet/igie/README.md index 825b6280..bd6db5e3 100644 --- a/models/cv/object_detection/rtmdet/igie/README.md +++ b/models/cv/object_detection/rtmdet/igie/README.md @@ -8,7 +8,8 @@ RTMDet, presented by the Shanghai AI Laboratory, is a novel framework for real-t | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/sabl/igie/README.md b/models/cv/object_detection/sabl/igie/README.md index abde9655..1fc9126b 100644 --- a/models/cv/object_detection/sabl/igie/README.md +++ b/models/cv/object_detection/sabl/igie/README.md @@ -8,7 +8,8 @@ SABL (Side-Aware Boundary Localization) is an innovative approach in object dete | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov10/igie/README.md b/models/cv/object_detection/yolov10/igie/README.md index 49193820..928ba8b7 100644 --- a/models/cv/object_detection/yolov10/igie/README.md +++ b/models/cv/object_detection/yolov10/igie/README.md @@ -8,7 +8,8 @@ YOLOv10, built on the Ultralytics Python package by researchers at Tsinghua Univ | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov10/ixrt/README.md b/models/cv/object_detection/yolov10/ixrt/README.md index 6fade83d..c3c5da49 100644 --- a/models/cv/object_detection/yolov10/ixrt/README.md +++ b/models/cv/object_detection/yolov10/ixrt/README.md @@ -8,7 +8,8 @@ YOLOv10, built on the Ultralytics Python package by researchers at Tsinghua Univ | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/object_detection/yolov11/igie/README.md b/models/cv/object_detection/yolov11/igie/README.md index 9bf48116..f3475d39 100644 --- a/models/cv/object_detection/yolov11/igie/README.md +++ b/models/cv/object_detection/yolov11/igie/README.md @@ -8,7 +8,8 @@ YOLOv11 is the latest generation of the YOLO (You Only Look Once) series object | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov11/ixrt/README.md b/models/cv/object_detection/yolov11/ixrt/README.md index 3172be85..3f81a01f 100644 --- a/models/cv/object_detection/yolov11/ixrt/README.md +++ b/models/cv/object_detection/yolov11/ixrt/README.md @@ -8,7 +8,8 @@ YOLOv11 is the latest generation of the YOLO (You Only Look Once) series object | GPU | [IXUCA 
SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/object_detection/yolov12/igie/README.md b/models/cv/object_detection/yolov12/igie/README.md index 6299af3a..d27ff639 100644 --- a/models/cv/object_detection/yolov12/igie/README.md +++ b/models/cv/object_detection/yolov12/igie/README.md @@ -8,7 +8,8 @@ YOLOv12 achieves high precision and efficient real-time object detection by inte | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/object_detection/yolov3/igie/README.md b/models/cv/object_detection/yolov3/igie/README.md index d469210d..5bce55df 100644 --- a/models/cv/object_detection/yolov3/igie/README.md +++ b/models/cv/object_detection/yolov3/igie/README.md @@ -8,7 +8,8 @@ YOLOv3 is an influential object detection algorithm. The key innovation of YOLOv3 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov3/ixrt/README.md b/models/cv/object_detection/yolov3/ixrt/README.md index b7976890..6963361c 100644 --- a/models/cv/object_detection/yolov3/ixrt/README.md +++ b/models/cv/object_detection/yolov3/ixrt/README.md @@ -8,7 +8,8 @@ YOLOv3 is an influential object detection algorithm. The key innovation of YOLOv3 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov4/igie/README.md b/models/cv/object_detection/yolov4/igie/README.md index c0753f51..4ef00254 100644 --- a/models/cv/object_detection/yolov4/igie/README.md +++ b/models/cv/object_detection/yolov4/igie/README.md @@ -8,7 +8,8 @@ YOLOv4 employs a two-step process, involving regression for bounding box positio | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov4/ixrt/README.md b/models/cv/object_detection/yolov4/ixrt/README.md index f6bd831e..14810cf9 100644 --- a/models/cv/object_detection/yolov4/ixrt/README.md +++ b/models/cv/object_detection/yolov4/ixrt/README.md @@ -8,7 +8,8 @@ YOLOv4 employs a two-step process, involving regression for bounding box positio | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov5/igie/README.md b/models/cv/object_detection/yolov5/igie/README.md index 
55f8b162..6a8f30d9 100644 --- a/models/cv/object_detection/yolov5/igie/README.md +++ b/models/cv/object_detection/yolov5/igie/README.md @@ -8,7 +8,8 @@ The YOLOv5 architecture is designed for efficient and accurate object detection | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov5/ixrt/README.md b/models/cv/object_detection/yolov5/ixrt/README.md index 69f56840..de030adb 100644 --- a/models/cv/object_detection/yolov5/ixrt/README.md +++ b/models/cv/object_detection/yolov5/ixrt/README.md @@ -8,7 +8,8 @@ The YOLOv5 architecture is designed for efficient and accurate object detection | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov5s/ixrt/README.md b/models/cv/object_detection/yolov5s/ixrt/README.md index 88f55f22..1e216cfb 100755 --- a/models/cv/object_detection/yolov5s/ixrt/README.md +++ b/models/cv/object_detection/yolov5s/ixrt/README.md @@ -8,7 +8,8 @@ The YOLOv5 architecture is designed for efficient and accurate object detection | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov6/igie/README.md b/models/cv/object_detection/yolov6/igie/README.md index 4a31e67a..bbb1ba20 100644 --- a/models/cv/object_detection/yolov6/igie/README.md +++ b/models/cv/object_detection/yolov6/igie/README.md @@ -8,7 +8,8 @@ YOLOv6 integrates cutting-edge object detection advancements from industry and a | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov6/ixrt/README.md b/models/cv/object_detection/yolov6/ixrt/README.md index 713c1f60..947cbc69 100644 --- a/models/cv/object_detection/yolov6/ixrt/README.md +++ b/models/cv/object_detection/yolov6/ixrt/README.md @@ -8,7 +8,8 @@ YOLOv6 integrates cutting-edge object detection advancements from industry and a | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov7/igie/README.md b/models/cv/object_detection/yolov7/igie/README.md index 1a12c661..e5979c9c 100644 --- a/models/cv/object_detection/yolov7/igie/README.md +++ b/models/cv/object_detection/yolov7/igie/README.md @@ -8,7 +8,8 @@ YOLOv7 is a state-of-the-art real-time object detector that surpasses all known | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | 
:----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov7/ixrt/README.md b/models/cv/object_detection/yolov7/ixrt/README.md index 8ff917cc..641440a5 100644 --- a/models/cv/object_detection/yolov7/ixrt/README.md +++ b/models/cv/object_detection/yolov7/ixrt/README.md @@ -8,7 +8,8 @@ YOLOv7 is an object detection model based on the YOLO (You Only Look Once) serie | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov8/igie/README.md b/models/cv/object_detection/yolov8/igie/README.md index 714cda3c..7b069b0b 100644 --- a/models/cv/object_detection/yolov8/igie/README.md +++ b/models/cv/object_detection/yolov8/igie/README.md @@ -8,7 +8,8 @@ Yolov8 combines speed and accuracy in real-time object detection tasks. With a f | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov8/ixrt/README.md b/models/cv/object_detection/yolov8/ixrt/README.md index a6c0e003..f7ea4dda 100644 --- a/models/cv/object_detection/yolov8/ixrt/README.md +++ b/models/cv/object_detection/yolov8/ixrt/README.md @@ -8,7 +8,8 @@ Yolov8 combines speed and accuracy in real-time object detection tasks. With a f | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov9/igie/README.md b/models/cv/object_detection/yolov9/igie/README.md index e52c3629..4bec9a28 100644 --- a/models/cv/object_detection/yolov9/igie/README.md +++ b/models/cv/object_detection/yolov9/igie/README.md @@ -8,7 +8,8 @@ YOLOv9 represents a major leap in real-time object detection by introducing inno | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolov9/ixrt/README.md b/models/cv/object_detection/yolov9/ixrt/README.md index 806be63a..b9e1b174 100644 --- a/models/cv/object_detection/yolov9/ixrt/README.md +++ b/models/cv/object_detection/yolov9/ixrt/README.md @@ -8,7 +8,8 @@ YOLOv9 represents a major leap in real-time object detection by introducing inno | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/cv/object_detection/yolox/igie/README.md b/models/cv/object_detection/yolox/igie/README.md index 1b0cf15f..df427312 100644 --- a/models/cv/object_detection/yolox/igie/README.md +++ b/models/cv/object_detection/yolox/igie/README.md @@ -8,7 +8,8 @@ YOLOX is an 
anchor-free version of YOLO, with a simpler design but better perfor | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/object_detection/yolox/ixrt/README.md b/models/cv/object_detection/yolox/ixrt/README.md index e372ac05..2d4fb0c8 100644 --- a/models/cv/object_detection/yolox/ixrt/README.md +++ b/models/cv/object_detection/yolox/ixrt/README.md @@ -9,7 +9,8 @@ For more details, please refer to our [report on Arxiv](https://arxiv.org/abs/21 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/ocr/kie_layoutxlm/igie/README.md b/models/cv/ocr/kie_layoutxlm/igie/README.md index 5ad55dd9..bc64cfc1 100644 --- a/models/cv/ocr/kie_layoutxlm/igie/README.md +++ b/models/cv/ocr/kie_layoutxlm/igie/README.md @@ -8,7 +8,8 @@ LayoutXLM is a groundbreaking multimodal pre-trained model for multilingual docu | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/ocr/svtr/igie/README.md b/models/cv/ocr/svtr/igie/README.md index f5e7ad54..f9aedccd 100644 --- a/models/cv/ocr/svtr/igie/README.md +++ b/models/cv/ocr/svtr/igie/README.md @@ -8,7 +8,8 @@ SVTR proposes a single vision model for scene text recognition. This model compl | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/pose_estimation/hrnetpose/igie/README.md b/models/cv/pose_estimation/hrnetpose/igie/README.md index 7af12fba..bf366b34 100644 --- a/models/cv/pose_estimation/hrnetpose/igie/README.md +++ b/models/cv/pose_estimation/hrnetpose/igie/README.md @@ -8,7 +8,8 @@ HRNetPose (High-Resolution Network for Pose Estimation) is a high-performance hu | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/pose_estimation/lightweight_openpose/ixrt/README.md b/models/cv/pose_estimation/lightweight_openpose/ixrt/README.md index c784c208..54b8579a 100644 --- a/models/cv/pose_estimation/lightweight_openpose/ixrt/README.md +++ b/models/cv/pose_estimation/lightweight_openpose/ixrt/README.md @@ -12,7 +12,8 @@ inference (no flip or any post-processing done). 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/pose_estimation/rtmpose/igie/README.md b/models/cv/pose_estimation/rtmpose/igie/README.md index a615bd46..6b6a7eec 100644 --- a/models/cv/pose_estimation/rtmpose/igie/README.md +++ b/models/cv/pose_estimation/rtmpose/igie/README.md @@ -8,7 +8,8 @@ RTMPose, a state-of-the-art framework developed by Shanghai AI Laboratory, excel | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/pose_estimation/rtmpose/ixrt/README.md b/models/cv/pose_estimation/rtmpose/ixrt/README.md index cdd76938..c7576241 100644 --- a/models/cv/pose_estimation/rtmpose/ixrt/README.md +++ b/models/cv/pose_estimation/rtmpose/ixrt/README.md @@ -8,7 +8,8 @@ RTMPose, a state-of-the-art framework developed by Shanghai AI Laboratory, excel | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/cv/semantic_segmentation/unet/igie/README.md b/models/cv/semantic_segmentation/unet/igie/README.md index 62220355..a168b53f 100644 --- a/models/cv/semantic_segmentation/unet/igie/README.md +++ b/models/cv/semantic_segmentation/unet/igie/README.md @@ -8,7 +8,8 @@ UNet is a convolutional neural network architecture for image segmentation, feat | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/multimodal/diffusion_model/stable-diffusion/diffusers/README.md b/models/multimodal/diffusion_model/stable-diffusion/diffusers/README.md index ad3e62c9..e1bce0d1 100644 --- a/models/multimodal/diffusion_model/stable-diffusion/diffusers/README.md +++ b/models/multimodal/diffusion_model/stable-diffusion/diffusers/README.md @@ -8,7 +8,8 @@ Stable Diffusion is a latent text-to-image diffusion model capable of generating | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/aria/vllm/README.md b/models/multimodal/vision_language_model/aria/vllm/README.md index 94cdb57d..8d4163ac 100644 --- a/models/multimodal/vision_language_model/aria/vllm/README.md +++ b/models/multimodal/vision_language_model/aria/vllm/README.md @@ -12,6 +12,7 @@ Aria is a multimodal native MoE model. 
It features: | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | vLLM | Release | | :----: | :----: | :----: | :----: | +| MR-V100 | 4.3.0 | >=0.6.4 | 25.09 | | MR-V100 | 4.2.0 | >=0.6.6 | 25.06 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/chameleon_7b/vllm/README.md b/models/multimodal/vision_language_model/chameleon_7b/vllm/README.md index 7a488b0a..fa903873 100755 --- a/models/multimodal/vision_language_model/chameleon_7b/vllm/README.md +++ b/models/multimodal/vision_language_model/chameleon_7b/vllm/README.md @@ -8,7 +8,8 @@ Chameleon, an AI system that mitigates these limitations by augmenting LLMs with | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/clip/ixformer/README.md b/models/multimodal/vision_language_model/clip/ixformer/README.md index 5d20f50c..f870b0d6 100644 --- a/models/multimodal/vision_language_model/clip/ixformer/README.md +++ b/models/multimodal/vision_language_model/clip/ixformer/README.md @@ -8,7 +8,8 @@ CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/fuyu_8b/vllm/README.md b/models/multimodal/vision_language_model/fuyu_8b/vllm/README.md index d13e0b36..559b1399 100755 --- a/models/multimodal/vision_language_model/fuyu_8b/vllm/README.md +++ b/models/multimodal/vision_language_model/fuyu_8b/vllm/README.md @@ -12,7 +12,8 @@ transformer decoder like an image transformer (albeit with no pooling and causal | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/h2vol/vllm/README.md b/models/multimodal/vision_language_model/h2vol/vllm/README.md index 0013e2e7..4d72af6c 100644 --- a/models/multimodal/vision_language_model/h2vol/vllm/README.md +++ b/models/multimodal/vision_language_model/h2vol/vllm/README.md @@ -12,6 +12,7 @@ language tasks. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | vLLM | Release | | :----: | :----: | :----: | :----: | +| MR-V100 | 4.3.0 | >=0.6.4 | 25.09 | | MR-V100 | 4.2.0 | >=0.6.4 | 25.06 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/idefics3/vllm/README.md b/models/multimodal/vision_language_model/idefics3/vllm/README.md index 78d4117c..75b34aa1 100644 --- a/models/multimodal/vision_language_model/idefics3/vllm/README.md +++ b/models/multimodal/vision_language_model/idefics3/vllm/README.md @@ -11,6 +11,7 @@ significantly enhancing capabilities around OCR, document understanding and visu | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | vLLM | Release | | :----: | :----: | :----: | :----: | +| MR-V100 | 4.3.0 | >=0.6.4 | 25.09 | | MR-V100 | 4.2.0 | >=0.6.4 | 25.06 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/intern_vl/vllm/README.md b/models/multimodal/vision_language_model/intern_vl/vllm/README.md index c337a340..dc9d06b2 100644 --- a/models/multimodal/vision_language_model/intern_vl/vllm/README.md +++ b/models/multimodal/vision_language_model/intern_vl/vllm/README.md @@ -11,7 +11,8 @@ learning. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/llama-3.2/vllm/README.md b/models/multimodal/vision_language_model/llama-3.2/vllm/README.md index b6aab078..7fddcc72 100644 --- a/models/multimodal/vision_language_model/llama-3.2/vllm/README.md +++ b/models/multimodal/vision_language_model/llama-3.2/vllm/README.md @@ -11,7 +11,8 @@ outperform many of the available open source and closed chat models on common in | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/llava/vllm/README.md b/models/multimodal/vision_language_model/llava/vllm/README.md index 78a21190..7027191f 100644 --- a/models/multimodal/vision_language_model/llava/vllm/README.md +++ b/models/multimodal/vision_language_model/llava/vllm/README.md @@ -13,7 +13,8 @@ reasoning. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/llava_next_video_7b/vllm/README.md b/models/multimodal/vision_language_model/llava_next_video_7b/vllm/README.md index 31b5622f..17857d0e 100755 --- a/models/multimodal/vision_language_model/llava_next_video_7b/vllm/README.md +++ b/models/multimodal/vision_language_model/llava_next_video_7b/vllm/README.md @@ -11,7 +11,8 @@ models on VideoMME bench. 
Base LLM: lmsys/vicuna-7b-v1.5 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/minicpm_v/vllm/README.md b/models/multimodal/vision_language_model/minicpm_v/vllm/README.md index ea1c8d74..ef8fa31b 100644 --- a/models/multimodal/vision_language_model/minicpm_v/vllm/README.md +++ b/models/multimodal/vision_language_model/minicpm_v/vllm/README.md @@ -10,7 +10,8 @@ techniques, making it suitable for deployment in resource-constrained environmen | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/multimodal/vision_language_model/pixtral/vllm/README.md b/models/multimodal/vision_language_model/pixtral/vllm/README.md index bb3abd99..5ef06c0e 100644 --- a/models/multimodal/vision_language_model/pixtral/vllm/README.md +++ b/models/multimodal/vision_language_model/pixtral/vllm/README.md @@ -8,7 +8,8 @@ Pixtral is trained to understand both natural images and documents, achieving 52 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.06 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.06 | ## Model Preparation diff --git a/models/nlp/llm/baichuan2-7b/vllm/README.md b/models/nlp/llm/baichuan2-7b/vllm/README.md index 95afd0d7..21144b8a 100755 --- a/models/nlp/llm/baichuan2-7b/vllm/README.md +++ b/models/nlp/llm/baichuan2-7b/vllm/README.md @@ -11,7 +11,8 @@ its excellent capabilities in language understanding and generation. This release | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/chatglm3-6b-32k/vllm/README.md b/models/nlp/llm/chatglm3-6b-32k/vllm/README.md index e42fad9b..7cfe5845 100644 --- a/models/nlp/llm/chatglm3-6b-32k/vllm/README.md +++ b/models/nlp/llm/chatglm3-6b-32k/vllm/README.md @@ -12,7 +12,8 @@ we recommend using ChatGLM3-6B-32K. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/chatglm3-6b/vllm/README.md b/models/nlp/llm/chatglm3-6b/vllm/README.md index 8f991f85..f8914d93 100644 --- a/models/nlp/llm/chatglm3-6b/vllm/README.md +++ b/models/nlp/llm/chatglm3-6b/vllm/README.md @@ -10,7 +10,8 @@ translation. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-llama-70b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-llama-70b/vllm/README.md index 539fa730..2cd9a10e 100644 --- a/models/nlp/llm/deepseek-r1-distill-llama-70b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-llama-70b/vllm/README.md @@ -10,7 +10,8 @@ based on Qwen2.5 and Llama3 series to the community. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-llama-8b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-llama-8b/vllm/README.md index 4f94a027..61cc5aa2 100644 --- a/models/nlp/llm/deepseek-r1-distill-llama-8b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-llama-8b/vllm/README.md @@ -10,7 +10,8 @@ based on Qwen2.5 and Llama3 series to the community. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm/README.md index 31d38e4d..6b3642bf 100644 --- a/models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-qwen-1.5b/vllm/README.md @@ -10,7 +10,8 @@ based on Qwen2.5 and Llama3 series to the community. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm/README.md index 20c1e9b5..14ecabfa 100644 --- a/models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-qwen-14b/vllm/README.md @@ -10,7 +10,8 @@ DeepSeek-R1. We slightly change their configs and tokenizers. We open-source di | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm/README.md index 7d83e8c3..5b6611b8 100644 --- a/models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-qwen-32b/vllm/README.md @@ -10,7 +10,8 @@ DeepSeek-R1. We slightly change their configs and tokenizers. 
We open-source di | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm/README.md b/models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm/README.md index 76612f36..e5dd8166 100644 --- a/models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm/README.md +++ b/models/nlp/llm/deepseek-r1-distill-qwen-7b/vllm/README.md @@ -10,7 +10,8 @@ based on Qwen2.5 and Llama3 series to the community. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama2-13b/trtllm/README.md b/models/nlp/llm/llama2-13b/trtllm/README.md index e1b39319..cdcf0756 100755 --- a/models/nlp/llm/llama2-13b/trtllm/README.md +++ b/models/nlp/llm/llama2-13b/trtllm/README.md @@ -11,7 +11,8 @@ from 7B to 70B. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama2-70b/trtllm/README.md b/models/nlp/llm/llama2-70b/trtllm/README.md index d1437323..d0336650 100644 --- a/models/nlp/llm/llama2-70b/trtllm/README.md +++ b/models/nlp/llm/llama2-70b/trtllm/README.md @@ -13,7 +13,8 @@ and contribute to the responsible development of LLMs. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama2-7b/trtllm/README.md b/models/nlp/llm/llama2-7b/trtllm/README.md index 9f69636b..ecd8ddf8 100644 --- a/models/nlp/llm/llama2-7b/trtllm/README.md +++ b/models/nlp/llm/llama2-7b/trtllm/README.md @@ -13,7 +13,8 @@ and contribute to the responsible development of LLMs. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama2-7b/vllm/README.md b/models/nlp/llm/llama2-7b/vllm/README.md index b4d7bf1c..78f90811 100755 --- a/models/nlp/llm/llama2-7b/vllm/README.md +++ b/models/nlp/llm/llama2-7b/vllm/README.md @@ -13,7 +13,8 @@ and contribute to the responsible development of LLMs. 
| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/llama3-70b/vllm/README.md b/models/nlp/llm/llama3-70b/vllm/README.md index 43a76589..77ca3743 100644 --- a/models/nlp/llm/llama3-70b/vllm/README.md +++ b/models/nlp/llm/llama3-70b/vllm/README.md @@ -13,7 +13,8 @@ large-scale AI applications, offering enhanced reasoning and instruction-followi | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen-7b/vllm/README.md b/models/nlp/llm/qwen-7b/vllm/README.md index de2d1a7c..b7051d69 100644 --- a/models/nlp/llm/qwen-7b/vllm/README.md +++ b/models/nlp/llm/qwen-7b/vllm/README.md @@ -13,7 +13,8 @@ developing intelligent agent applications. It also includes specialized versions | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen1.5-14b/vllm/README.md b/models/nlp/llm/qwen1.5-14b/vllm/README.md index 5a520db3..fd431b2d 100644 --- a/models/nlp/llm/qwen1.5-14b/vllm/README.md +++ b/models/nlp/llm/qwen1.5-14b/vllm/README.md @@ -12,7 +12,8 @@ not include GQA (except for 32B) and the mixture of SWA and full attention. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen1.5-32b/vllm/README.md b/models/nlp/llm/qwen1.5-32b/vllm/README.md index 69ac33dd..158d882a 100755 --- a/models/nlp/llm/qwen1.5-32b/vllm/README.md +++ b/models/nlp/llm/qwen1.5-32b/vllm/README.md @@ -11,7 +11,8 @@ have an improved tokenizer adaptive to multiple natural languages and codes. | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | | :----: | :----: | :----: | -| MR-V100 | 4.2.0 | 25.03 | +| MR-V100 | 4.3.0 | 25.09 | +| MR-V100 | 4.2.0 | 25.03 | ## Model Preparation diff --git a/models/nlp/llm/qwen1.5-72b/vllm/README.md b/models/nlp/llm/qwen1.5-72b/vllm/README.md index aba28082..ab26f60a 100644 --- a/models/nlp/llm/qwen1.5-72b/vllm/README.md +++ b/models/nlp/llm/qwen1.5-72b/vllm/README.md @@ -12,7 +12,8 @@ not include GQA (except for 32B) and the mixture of SWA and full attention. 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/llm/qwen1.5-7b/tgi/README.md b/models/nlp/llm/qwen1.5-7b/tgi/README.md
index e00da336..34ea6430 100644
--- a/models/nlp/llm/qwen1.5-7b/tgi/README.md
+++ b/models/nlp/llm/qwen1.5-7b/tgi/README.md
@@ -12,7 +12,8 @@ not include GQA (except for 32B) and the mixture of SWA and full attention.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/llm/qwen1.5-7b/vllm/README.md b/models/nlp/llm/qwen1.5-7b/vllm/README.md
index 7a9cc65f..6e71dc32 100644
--- a/models/nlp/llm/qwen1.5-7b/vllm/README.md
+++ b/models/nlp/llm/qwen1.5-7b/vllm/README.md
@@ -12,7 +12,8 @@ not include GQA (except for 32B) and the mixture of SWA and full attention.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/llm/qwen2-72b/vllm/README.md b/models/nlp/llm/qwen2-72b/vllm/README.md
index 74200cf7..69a0cc9c 100755
--- a/models/nlp/llm/qwen2-72b/vllm/README.md
+++ b/models/nlp/llm/qwen2-72b/vllm/README.md
@@ -18,7 +18,8 @@ Please refer to this section for detailed instructions on how to deploy Qwen2 fo
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/llm/qwen2-7b/vllm/README.md b/models/nlp/llm/qwen2-7b/vllm/README.md
index 5bcd6b53..b7b28e45 100755
--- a/models/nlp/llm/qwen2-7b/vllm/README.md
+++ b/models/nlp/llm/qwen2-7b/vllm/README.md
@@ -17,7 +17,8 @@ Qwen2-7B-Instruct supports a context length of up to 131,072 tokens, enabling th
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/llm/stablelm/vllm/README.md b/models/nlp/llm/stablelm/vllm/README.md
index ffcdefdf..ceeafabe 100644
--- a/models/nlp/llm/stablelm/vllm/README.md
+++ b/models/nlp/llm/stablelm/vllm/README.md
@@ -12,7 +12,8 @@ contextual relationships, which enhances the quality and accuracy of the generat
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/albert/ixrt/README.md b/models/nlp/plm/albert/ixrt/README.md
index 778719bd..d53712c9 100644
--- a/models/nlp/plm/albert/ixrt/README.md
+++ b/models/nlp/plm/albert/ixrt/README.md
@@ -8,7 +8,8 @@ Albert (A Lite BERT) is a variant of the BERT (Bidirectional Encoder Representat
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/bert_base_ner/igie/README.md b/models/nlp/plm/bert_base_ner/igie/README.md
index ab6fd88b..f90ceb6e 100644
--- a/models/nlp/plm/bert_base_ner/igie/README.md
+++ b/models/nlp/plm/bert_base_ner/igie/README.md
@@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/bert_base_squad/igie/README.md b/models/nlp/plm/bert_base_squad/igie/README.md
index ac7477f9..9e42dde8 100644
--- a/models/nlp/plm/bert_base_squad/igie/README.md
+++ b/models/nlp/plm/bert_base_squad/igie/README.md
@@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/bert_base_squad/ixrt/README.md b/models/nlp/plm/bert_base_squad/ixrt/README.md
index 1f3dd395..b9569a6c 100644
--- a/models/nlp/plm/bert_base_squad/ixrt/README.md
+++ b/models/nlp/plm/bert_base_squad/ixrt/README.md
@@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/bert_large_squad/igie/README.md b/models/nlp/plm/bert_large_squad/igie/README.md
index e1d14358..9182202f 100644
--- a/models/nlp/plm/bert_large_squad/igie/README.md
+++ b/models/nlp/plm/bert_large_squad/igie/README.md
@@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/bert_large_squad/ixrt/README.md b/models/nlp/plm/bert_large_squad/ixrt/README.md
index f6603413..0670e856 100644
--- a/models/nlp/plm/bert_large_squad/ixrt/README.md
+++ b/models/nlp/plm/bert_large_squad/ixrt/README.md
@@ -8,7 +8,8 @@ BERT is designed to pre-train deep bidirectional representations from unlabeled
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/deberta/ixrt/README.md b/models/nlp/plm/deberta/ixrt/README.md
index 87496848..cd2e30b1 100644
--- a/models/nlp/plm/deberta/ixrt/README.md
+++ b/models/nlp/plm/deberta/ixrt/README.md
@@ -13,7 +13,8 @@ fine-tuning to better suit specific downstream tasks, thereby improving the mode
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/roberta/ixrt/README.md b/models/nlp/plm/roberta/ixrt/README.md
index 92cc8e4e..a25cf804 100644
--- a/models/nlp/plm/roberta/ixrt/README.md
+++ b/models/nlp/plm/roberta/ixrt/README.md
@@ -15,7 +15,8 @@ our models and code.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/nlp/plm/roformer/ixrt/README.md b/models/nlp/plm/roformer/ixrt/README.md
index 5d37b5e6..3b90ce7a 100644
--- a/models/nlp/plm/roformer/ixrt/README.md
+++ b/models/nlp/plm/roformer/ixrt/README.md
@@ -17,7 +17,8 @@ datasets.
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
diff --git a/models/others/recommendation/wide_and_deep/ixrt/README.md b/models/others/recommendation/wide_and_deep/ixrt/README.md
index 22796241..f50911d1 100644
--- a/models/others/recommendation/wide_and_deep/ixrt/README.md
+++ b/models/others/recommendation/wide_and_deep/ixrt/README.md
@@ -8,7 +8,8 @@ Generalized linear models with nonlinear feature transformations are widely used
 
 | GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
 | :----: | :----: | :----: |
-| MR-V100 | 4.2.0 | 25.03 |
+| MR-V100 | 4.3.0 | 25.09 |
+| MR-V100 | 4.2.0 | 25.03 |
 
 ## Model Preparation
 
-- 
Gitee