diff --git a/cv/classification/acmix/pytorch/README.md b/cv/classification/acmix/pytorch/README.md
index 724ed82fc23d33e88acace40dbb2b3717b071e0a..68e584912018e7f506ca35bcc25bed0647673256 100644
--- a/cv/classification/acmix/pytorch/README.md
+++ b/cv/classification/acmix/pytorch/README.md
@@ -9,6 +9,13 @@ the local feature extraction of convolutions and the global context modeling of
 improved performance on image recognition tasks with minimal computational overhead compared to pure convolution or
 attention-based approaches.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/acnet/pytorch/README.md b/cv/classification/acnet/pytorch/README.md
index 65a5e803771abfa718722e6e06dd3d6cbb0ff3dd..2b8f3fedf2285be6d11b275f2bf06fd372768fc9 100755
--- a/cv/classification/acnet/pytorch/README.md
+++ b/cv/classification/acnet/pytorch/README.md
@@ -9,6 +9,13 @@ be seamlessly integrated into existing architectures, boosting accuracy without
 training, ACNet converts back to the original architecture, maintaining efficiency. It demonstrates consistent
 performance improvements across various models on datasets like CIFAR and ImageNet.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.12 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/alexnet/pytorch/README.md b/cv/classification/alexnet/pytorch/README.md
index 699735310d33909d8231032fa480885bafb3b89e..b7eee5b187273ba4435a553fbc8354e32be76874 100644
--- a/cv/classification/alexnet/pytorch/README.md
+++ b/cv/classification/alexnet/pytorch/README.md
@@ -10,6 +10,13 @@ principles continue to influence modern neural network architectures in computer
 classic convolutional neural network architecture. It consists of convolutions, max pooling and dense layers as the
 basic building blocks.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/alexnet/tensorflow/README.md b/cv/classification/alexnet/tensorflow/README.md
index 59a8cb476769e2fda7d42932446516070514e0a6..22c0268083850ff0096b1a1478ac6e75c400504a 100644
--- a/cv/classification/alexnet/tensorflow/README.md
+++ b/cv/classification/alexnet/tensorflow/README.md
@@ -8,6 +8,13 @@ innovations like ReLU activations, dropout regularization, and GPU acceleration.
 success popularized deep learning and established CNNs as the dominant approach for image recognition. AlexNet's design
 principles continue to influence modern neural network architectures in computer vision applications.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/byol/pytorch/README.md b/cv/classification/byol/pytorch/README.md
index 3b53a8d0b66e1a37d9058b641564f1d702cc8b1d..db43138c7aa020f1020092dede0f3e1d9802c44b 100644
--- a/cv/classification/byol/pytorch/README.md
+++ b/cv/classification/byol/pytorch/README.md
@@ -8,6 +8,13 @@ through contrasting augmented views of the same image. BYOL's unique approach el
 achieving state-of-the-art performance in unsupervised learning. It's particularly effective for pre-training models on
 large datasets before fine-tuning for specific tasks.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/cbam/pytorch/README.md b/cv/classification/cbam/pytorch/README.md
index 43d353876469b368eac57160586366e64828725f..370bc1a21bc5070df31cfb64bff89eceb96dd259 100644
--- a/cv/classification/cbam/pytorch/README.md
+++ b/cv/classification/cbam/pytorch/README.md
@@ -8,6 +8,13 @@ significant computational overhead. CBAM helps networks focus on important featu
 leading to better object recognition and localization. The module is lightweight and can be easily integrated into
 existing CNN architectures, making it a versatile tool for improving various computer vision tasks.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.0.0 | 23.06 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/convnext/pytorch/README.md b/cv/classification/convnext/pytorch/README.md
index aab8f3f6648adf85bfbfce2e85d755c00e402547..f2ccfdac62dccab818297820a89a0e680e6fb442 100644
--- a/cv/classification/convnext/pytorch/README.md
+++ b/cv/classification/convnext/pytorch/README.md
@@ -9,6 +9,13 @@ modernized ConvNets can match or exceed Transformer-based models in accuracy and
 Its simplicity and strong performance make it a compelling choice for image classification and other computer vision
 applications.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/cspdarknet53/pytorch/README.md b/cv/classification/cspdarknet53/pytorch/README.md
index c4812e97f19dbc5e0452e9b1a8c59fcd572d504b..1d5535339d18910ba402f6e36d11ee494db73f32 100644
--- a/cv/classification/cspdarknet53/pytorch/README.md
+++ b/cv/classification/cspdarknet53/pytorch/README.md
@@ -8,6 +8,13 @@ maps across stages. The model achieves better gradient flow and reduces memory u
 architectures. CspDarknet53 is particularly effective in real-time detection tasks, offering a good balance between
 accuracy and speed, making it popular in modern object detection frameworks like YOLOv4.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.0.0 | 23.06 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/densenet/paddlepaddle/README.md b/cv/classification/densenet/paddlepaddle/README.md
index df4f0002461ba6adf074c78ff1d4e2f6134dd64d..579500a9a78378b09f826c19e5e0c2b31ae6812c 100644
--- a/cv/classification/densenet/paddlepaddle/README.md
+++ b/cv/classification/densenet/paddlepaddle/README.md
@@ -8,6 +8,13 @@ subsequent layers. This dense connectivity pattern improves gradient flow, encou
 vanishing gradient problems. DenseNet achieves state-of-the-art performance with fewer parameters compared to
 traditional CNNs, making it efficient for various computer vision tasks like image classification and object detection.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/densenet/pytorch/README.md b/cv/classification/densenet/pytorch/README.md
index 454f846aea25046b1b1437fd269c3537973382da..8283926e9dcff4a81f2a49a10ac082d0f163af6c 100755
--- a/cv/classification/densenet/pytorch/README.md
+++ b/cv/classification/densenet/pytorch/README.md
@@ -8,6 +8,13 @@ subsequent layers. This dense connectivity pattern improves gradient flow, encou
 vanishing gradient problems. DenseNet achieves state-of-the-art performance with fewer parameters compared to
 traditional CNNs, making it efficient for various computer vision tasks like image classification and object detection.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/dpn107/pytorch/README.md b/cv/classification/dpn107/pytorch/README.md
index 3bf787dee93a6091207e40348dbc550d911cc6ca..84f8bf94ae0f55d3c28933d01135f4751d2ec160 100644
--- a/cv/classification/dpn107/pytorch/README.md
+++ b/cv/classification/dpn107/pytorch/README.md
@@ -8,6 +8,13 @@ preserving important features and another for discovering new ones. DPN107 achie
 image classification tasks while maintaining computational efficiency. Its unique design makes it particularly
 effective for complex visual recognition tasks, offering a balance between model accuracy and resource utilization.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/dpn92/pytorch/README.md b/cv/classification/dpn92/pytorch/README.md
index 9dc35d69446b46e8309aeee446281dbe55d8cce5..cbfaa7eaf00d50c0ff14ee48fa1b389e8e2c84d8 100644
--- a/cv/classification/dpn92/pytorch/README.md
+++ b/cv/classification/dpn92/pytorch/README.md
@@ -8,6 +8,13 @@ enables efficient learning of both shared and new features. DPN92 achieves state
 classification tasks while maintaining computational efficiency. Its unique architecture makes it particularly
 effective for tasks requiring both feature preservation and discovery.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/eca_mobilenet_v2/pytorch/README.md b/cv/classification/eca_mobilenet_v2/pytorch/README.md
index 7a0a347175ba7c7d26b08bc05fb515975cd9cdac..25f34b0fe844359e66560e7f918479c48e8f9196 100644
--- a/cv/classification/eca_mobilenet_v2/pytorch/README.md
+++ b/cv/classification/eca_mobilenet_v2/pytorch/README.md
@@ -9,6 +9,13 @@ maintaining computational efficiency, making it suitable for mobile and edge dev
 accuracy than standard MobileNet V2 with minimal additional parameters, making it ideal for resource-constrained
 image classification tasks.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/eca_resnet152/pytorch/README.md b/cv/classification/eca_resnet152/pytorch/README.md
index 93d9557577d59b7ea38f3589db76646266e12e1f..aff3dcb71a1704cf72db773091bf9a0fe7d23e47 100644
--- a/cv/classification/eca_resnet152/pytorch/README.md
+++ b/cv/classification/eca_resnet152/pytorch/README.md
@@ -9,6 +9,13 @@ superior accuracy in image classification tasks compared to standard ResNet152,
 complex visual recognition problems. Its architecture balances performance and efficiency, making it suitable for
 various computer vision applications.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/efficientnet_b0/paddlepaddle/README.md b/cv/classification/efficientnet_b0/paddlepaddle/README.md
index 28416892241454903c6d050924233a8cdbfb8dc5..c74ec56669f78f1ff29f823997732c3b8a839cbc 100644
--- a/cv/classification/efficientnet_b0/paddlepaddle/README.md
+++ b/cv/classification/efficientnet_b0/paddlepaddle/README.md
@@ -9,6 +9,13 @@ convolution (MBConv) blocks with squeeze-and-excitation optimization. EfficientN
 mobile and edge devices, offering high accuracy in image classification tasks while maintaining low computational
 requirements.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.12 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/efficientnet_b4/pytorch/README.md b/cv/classification/efficientnet_b4/pytorch/README.md
index 91585de19d38c2da3ba6e53a700acf5f2784a92e..84952a164b74c58648d0d27443e36b486fe08dde 100755
--- a/cv/classification/efficientnet_b4/pytorch/README.md
+++ b/cv/classification/efficientnet_b4/pytorch/README.md
@@ -8,6 +8,13 @@ superior accuracy compared to smaller EfficientNet variants. The model maintains
 more complex visual recognition tasks. EfficientNetB4 is particularly effective for high-accuracy image classification
 scenarios where computational resources are available, offering a good trade-off between performance and efficiency.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/fasternet/pytorch/README.md b/cv/classification/fasternet/pytorch/README.md
index 09b0e132a99364f826f09de61b63912462b1430d..ff6686ff929d5443445c2e0ab03f4bc6e4943caa 100644
--- a/cv/classification/fasternet/pytorch/README.md
+++ b/cv/classification/fasternet/pytorch/README.md
@@ -9,6 +9,13 @@ and speed. Its innovative architecture makes it particularly effective for mobil
 resources are limited. The model demonstrates state-of-the-art results in various computer vision tasks while
 maintaining low latency.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.0.0 | 23.06 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/googlenet/paddlepaddle/README.md b/cv/classification/googlenet/paddlepaddle/README.md
index 61b77aba66ed6f661a33c79a8ea2cdcb6bf16630..5b7a802392a79b01dd9a883e765f67cbbd7b59f3 100644
--- a/cv/classification/googlenet/paddlepaddle/README.md
+++ b/cv/classification/googlenet/paddlepaddle/README.md
@@ -8,6 +8,13 @@ extraction at various scales. The network uses 1x1 convolutions for dimensionali
 efficient. GoogLeNet achieved state-of-the-art performance in image classification tasks while maintaining relatively
 low computational complexity. Its innovative design has influenced many subsequent CNN architectures in computer vision.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.3.0 | 22.12 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/googlenet/pytorch/README.md b/cv/classification/googlenet/pytorch/README.md
index 759dd4d2855eba7ab3b42f8088c524e26ce9b2e2..18cc31154b0e179e992eb70533ce44e6026921e9 100755
--- a/cv/classification/googlenet/pytorch/README.md
+++ b/cv/classification/googlenet/pytorch/README.md
@@ -8,6 +8,13 @@ extraction at various scales. The network uses 1x1 convolutions for dimensionali
 efficient. GoogLeNet achieved state-of-the-art performance in image classification tasks while maintaining relatively
 low computational complexity. Its innovative design has influenced many subsequent CNN architectures in computer vision.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/inceptionv3/mindspore/README.md b/cv/classification/inceptionv3/mindspore/README.md
index 45c8a0701ea238b2395776248245c70d7108fe1f..d5cd0d63a49f43fa42e125acefe79bc5fc819e8f 100644
--- a/cv/classification/inceptionv3/mindspore/README.md
+++ b/cv/classification/inceptionv3/mindspore/README.md
@@ -9,6 +9,12 @@ flow and convergence. InceptionV3 achieves state-of-the-art performance in image
 computational efficiency, making it suitable for various computer vision applications requiring high accuracy and
 robust feature learning.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/inceptionv3/pytorch/README.md b/cv/classification/inceptionv3/pytorch/README.md
index c8f869658ff31c4607dc508d0b1aae2983681832..10760cce61afd381578c03311107bc1640c105bf 100644
--- a/cv/classification/inceptionv3/pytorch/README.md
+++ b/cv/classification/inceptionv3/pytorch/README.md
@@ -3,6 +3,13 @@
 ## Model Description
 
 Inception-v3 is a convolutional neural network architecture from the Inception family that makes several improvements including using Label Smoothing, Factorized 7 x 7 convolutions, and the use of an auxiliary classifer to propagate label information lower down the network (along with the use of batch normalization for layers in the sidehead).
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Install Dependencies
diff --git a/cv/classification/inceptionv3/tensorflow/README.md b/cv/classification/inceptionv3/tensorflow/README.md
index ae20907e70573a37adc4f08e065d505336b9c5ef..ee389f1b720339f5904f4ef1c5d7e34d7b22e147 100644
--- a/cv/classification/inceptionv3/tensorflow/README.md
+++ b/cv/classification/inceptionv3/tensorflow/README.md
@@ -4,6 +4,13 @@
 InceptionV3 is a convolutional neural network architecture from the Inception family that makes several improvements including using Label Smoothing, Factorized 7 x 7 convolutions, and the use of an auxiliary classifer to propagate label information lower down the network (along with the use of batch normalization for layers in the sidehead).
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Install Dependencies
diff --git a/cv/classification/inceptionv4/pytorch/README.md b/cv/classification/inceptionv4/pytorch/README.md
index 625b75f1819666fab9f146ae984ddeb2f3196c16..b13ee0ec03db86a1fe5bd0c77b06621c3bb88c76 100644
--- a/cv/classification/inceptionv4/pytorch/README.md
+++ b/cv/classification/inceptionv4/pytorch/README.md
@@ -9,6 +9,13 @@ InceptionV4 demonstrates improved accuracy over its predecessors while maintaini
 suitable for various computer vision applications. Its design focuses on optimizing network structure for better
 feature representation and classification performance.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/internimage/pytorch/README.md b/cv/classification/internimage/pytorch/README.md
index 6f61e8052d2bbea8f59fb55b94fb0fa36d62a723..31ae1ee0d2a3ff6a66b691a632845e1c3aaa1457 100644
--- a/cv/classification/internimage/pytorch/README.md
+++ b/cv/classification/internimage/pytorch/README.md
@@ -9,6 +9,12 @@ InternImage demonstrates exceptional scalability and efficiency, making it suita
 general image recognition to complex autonomous driving perception systems. Its design focuses on balancing model
 capacity with computational efficiency.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/lenet/pytorch/README.md b/cv/classification/lenet/pytorch/README.md
index a6b0bcd6f0ce3b11a85452e62b4f51eb888c7440..98614a7189a57b0c247856fe8b4f23b9d796ab47 100755
--- a/cv/classification/lenet/pytorch/README.md
+++ b/cv/classification/lenet/pytorch/README.md
@@ -8,6 +8,13 @@ for modern deep learning. Designed for the MNIST dataset, LeNet demonstrated the
 recognition tasks. Its simple yet effective architecture inspired subsequent networks like AlexNet and VGG, making it a
 cornerstone in the evolution of deep learning for computer vision applications.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/mobilenetv2/pytorch/README.md b/cv/classification/mobilenetv2/pytorch/README.md
index 3f17237933d34059d3bcf050d0183b610c1473cb..971367ea57fa3ef6f92002931ab65277354ec38e 100644
--- a/cv/classification/mobilenetv2/pytorch/README.md
+++ b/cv/classification/mobilenetv2/pytorch/README.md
@@ -8,6 +8,13 @@ computational complexity. This architecture maintains high accuracy while signif
 latency compared to traditional CNNs. MobileNetV2's design focuses on balancing performance and efficiency, making it
 ideal for real-time applications on resource-constrained devices like smartphones and IoT devices.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/mobilenetv3/mindspore/README.md b/cv/classification/mobilenetv3/mindspore/README.md
index 068999ce2d07dbbb737d5fc6a059cca1d88e6df3..fdb6da7775ba0d3d6108e30b8aa960237539c9ef 100644
--- a/cv/classification/mobilenetv3/mindspore/README.md
+++ b/cv/classification/mobilenetv3/mindspore/README.md
@@ -9,6 +9,12 @@ mobile vision tasks, offering variants for different computational budgets. Its
 power consumption, making it ideal for real-time applications on resource-constrained devices like smartphones and
 embedded systems.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/mobilenetv3/paddlepaddle/README.md b/cv/classification/mobilenetv3/paddlepaddle/README.md
index a0dd9fba3caf8a0fcfe0f12053af954427b525af..dc0601cc30bc3ae8337370fd5f5e443b09563127 100644
--- a/cv/classification/mobilenetv3/paddlepaddle/README.md
+++ b/cv/classification/mobilenetv3/paddlepaddle/README.md
@@ -9,6 +9,13 @@ mobile vision tasks, offering variants for different computational budgets. Its
 power consumption, making it ideal for real-time applications on resource-constrained devices like smartphones and
 embedded systems.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.3.0 | 22.12 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/mobilenetv3/pytorch/README.md b/cv/classification/mobilenetv3/pytorch/README.md
index 29c69a3420c79449d4cf6b05744c5883ad1bb9f4..90c11c71f489c0e0c17acb93a1c11ef4619bfae7 100644
--- a/cv/classification/mobilenetv3/pytorch/README.md
+++ b/cv/classification/mobilenetv3/pytorch/README.md
@@ -9,6 +9,13 @@ mobile vision tasks, offering variants for different computational budgets. Its
 power consumption, making it ideal for real-time applications on resource-constrained devices like smartphones and
 embedded systems.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/mobilenetv3_large_x1_0/paddlepaddle/README.md b/cv/classification/mobilenetv3_large_x1_0/paddlepaddle/README.md
index f76bf582cad52cb93cdd44f90e92a25d748d838f..223db00a4e68f839debec73ad5b65aeca553742a 100644
--- a/cv/classification/mobilenetv3_large_x1_0/paddlepaddle/README.md
+++ b/cv/classification/mobilenetv3_large_x1_0/paddlepaddle/README.md
@@ -9,6 +9,13 @@ accuracy on ImageNet. Its design focuses on reducing latency while maintaining p
 mobile applications. MobileNetV3_large_x1_0 serves as a general-purpose backbone for various computer vision tasks on
 resource-constrained devices.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/mobileone/pytorch/README.md b/cv/classification/mobileone/pytorch/README.md
index f1770469ee1b134585ca992c4498722eeed4276f..c283b31beca33c4149dac03106f18e69456b0df6 100644
--- a/cv/classification/mobileone/pytorch/README.md
+++ b/cv/classification/mobileone/pytorch/README.md
@@ -8,6 +8,13 @@ speed on mobile chips. Achieving under 1ms inference time on iPhone 12 with 75.9
 outperforms other efficient architectures in both speed and accuracy. It's versatile for tasks like image
 classification, object detection, and segmentation, making it ideal for mobile deployment.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/mocov2/pytorch/README.md b/cv/classification/mocov2/pytorch/README.md
index 9c81bd97734f1228594b81d5952c5fef6378a29d..278c8a4cdc840795ffbf15d60512402a8d9659c0 100644
--- a/cv/classification/mocov2/pytorch/README.md
+++ b/cv/classification/mocov2/pytorch/README.md
@@ -8,6 +8,13 @@ techniques to boost performance without requiring large batch sizes. This approa
 from unlabeled data, establishing strong baselines for self-supervised learning. MoCoV2 outperforms previous methods
 like SimCLR while maintaining computational efficiency, making it accessible for various computer vision tasks.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/pp-lcnet/paddlepaddle/README.md b/cv/classification/pp-lcnet/paddlepaddle/README.md
index 04f937853bef5ab4ecba5309cd78668c6601d9c3..c1ab063eb21d4d55d2474755c78fce27b7bd72fc 100644
--- a/cv/classification/pp-lcnet/paddlepaddle/README.md
+++ b/cv/classification/pp-lcnet/paddlepaddle/README.md
@@ -9,6 +9,13 @@ vision applications like object detection and semantic segmentation. PP-LCNet's
 with minimal computational overhead, making it ideal for resource-constrained environments requiring fast and
 efficient inference.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.09 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/repmlp/pytorch/README.md b/cv/classification/repmlp/pytorch/README.md
index fbc158f2db98a354c205ef5bb2a067d0bfbe290c..bdfb39ffa2ae42f49d598fbfdfa0f12dee6bae99 100644
--- a/cv/classification/repmlp/pytorch/README.md
+++ b/cv/classification/repmlp/pytorch/README.md
@@ -9,6 +9,13 @@ these components into pure FC layers for inference, achieving both high accuracy
 architecture is particularly effective for image recognition tasks, offering a novel approach to balance global and
 local feature learning.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.1.0 | 23.12 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/repvgg/paddlepaddle/README.md b/cv/classification/repvgg/paddlepaddle/README.md
index 0f638266b5c99302bb07955ce278aee04b306ab3..202589185dd21f58939d94d1069094ec7f3933be 100644
--- a/cv/classification/repvgg/paddlepaddle/README.md
+++ b/cv/classification/repvgg/paddlepaddle/README.md
@@ -9,6 +9,13 @@ state-of-the-art performance in image classification tasks while maintaining hig
 is particularly suitable for applications requiring both high accuracy and fast inference, making it ideal for
 real-world deployment scenarios.
 
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 3.0.0 | 23.03 |
+
 ## Model Preparation
 
 ### Prepare Resources
diff --git a/cv/classification/repvgg/pytorch/README.md b/cv/classification/repvgg/pytorch/README.md
index 2b23b995fea8822a821c0cd01aade3fcb24a270a..3ce02ff59b3944cce5face11bd17010596f13cf7 100755
--- a/cv/classification/repvgg/pytorch/README.md
+++ b/cv/classification/repvgg/pytorch/README.md
@@ -9,6 +9,13 @@ state-of-the-art performance in image classification tasks while maintaining hig
 is particularly suitable for applications requiring both high accuracy and fast inference, making it ideal for
 real-world deployment scenarios.
 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/repvit/pytorch/README.md b/cv/classification/repvit/pytorch/README.md index 05c43e7fe44f674a34102cb50cdf95ca24d8d478..8f3990c33625e374c4eb5e3a225f3edeef53b2a8 100644 --- a/cv/classification/repvit/pytorch/README.md +++ b/cv/classification/repvit/pytorch/README.md @@ -8,6 +8,13 @@ latency than lightweight ViTs. RepViT demonstrates state-of-the-art accuracy on inference speeds, making it ideal for resource-constrained applications. Its pure CNN architecture ensures mobile-friendliness, with the largest variant achieving 83.7% accuracy at just 2.3ms latency on an iPhone 12. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/res2net50_14w_8s/paddlepaddle/README.md b/cv/classification/res2net50_14w_8s/paddlepaddle/README.md index c19b2c642274ccb27f0b80073b5492d3a9a91b6d..32b396def2b9182c2bf775f99b56ca79f94fc9ed 100644 --- a/cv/classification/res2net50_14w_8s/paddlepaddle/README.md +++ b/cv/classification/res2net50_14w_8s/paddlepaddle/README.md @@ -8,6 +8,13 @@ improving feature representation. The 14w_8s variant uses 14 width and 8 scales, in image classification tasks. This architecture effectively balances model complexity and computational efficiency, making it suitable for various computer vision applications requiring both high accuracy and efficient processing. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnest101/pytorch/README.md b/cv/classification/resnest101/pytorch/README.md index 8177b6a5142013f5c2dcea4acd164231577611c9..84f9fc2b52e52ddc37252b5219bad3b0f56e256b 100644 --- a/cv/classification/resnest101/pytorch/README.md +++ b/cv/classification/resnest101/pytorch/README.md @@ -9,6 +9,13 @@ by effectively balancing computational efficiency and model capacity. ResNeSt101 large-scale visual recognition tasks, offering improved accuracy over standard ResNet variants while maintaining efficient training and inference capabilities. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnest14/pytorch/README.md b/cv/classification/resnest14/pytorch/README.md index 230d10c6ae6b5eea31170399116c488b44662b48..a02324aaf5c08f9ae29e3b4ee6e31daeb5afa0d8 100644 --- a/cv/classification/resnest14/pytorch/README.md +++ b/cv/classification/resnest14/pytorch/README.md @@ -9,6 +9,13 @@ complexity and computational efficiency. ResNeSt14's design is particularly suit resources, offering improved accuracy over standard ResNet variants while maintaining fast training and inference capabilities. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnest269/pytorch/README.md b/cv/classification/resnest269/pytorch/README.md index d19cd53daadc9523f6efc4e5c383fcaab1f87ef1..059485c9664fe6da1ae5dd76393cd0a505e4de73 100644 --- a/cv/classification/resnest269/pytorch/README.md +++ b/cv/classification/resnest269/pytorch/README.md @@ -9,6 +9,13 @@ by effectively balancing computational efficiency and model capacity. ResNeSt269 large-scale visual recognition tasks, offering improved accuracy over standard ResNet variants while maintaining efficient training and inference capabilities. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnest50/paddlepaddle/README.md b/cv/classification/resnest50/paddlepaddle/README.md index 0fae57eefb1e2b64223f3a3f2f624e34d79326f7..1d2ae4c5157e515f055f307f113553e6bf7eea68 100644 --- a/cv/classification/resnest50/paddlepaddle/README.md +++ b/cv/classification/resnest50/paddlepaddle/README.md @@ -9,6 +9,13 @@ balancing computational efficiency and model capacity. ResNeSt50's design is par visual recognition tasks, offering improved accuracy over standard ResNet variants while maintaining efficient training and inference capabilities. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnest50/pytorch/README.md b/cv/classification/resnest50/pytorch/README.md index 8f3fba24a893798942b8a7365c69f61e110681ae..b10372fbfe7a43c4bcbe9bd36b5c3cfec9bec139 100644 --- a/cv/classification/resnest50/pytorch/README.md +++ b/cv/classification/resnest50/pytorch/README.md @@ -9,6 +9,13 @@ balancing computational efficiency and model capacity. ResNeSt50's design is par visual recognition tasks, offering improved accuracy over standard ResNet variants while maintaining efficient training and inference capabilities. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnet101/pytorch/README.md b/cv/classification/resnet101/pytorch/README.md index ec5c7a7f5bf60040a429180f714b151890b97f31..916b86ff9178e0ff33e3e380f2f10d25b1f9649a 100644 --- a/cv/classification/resnet101/pytorch/README.md +++ b/cv/classification/resnet101/pytorch/README.md @@ -9,6 +9,13 @@ ResNet101 achieves state-of-the-art performance in image classification tasks wh efficiency. Its architecture is widely used as a backbone for various computer vision applications, including object detection and segmentation. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnet152/pytorch/README.md b/cv/classification/resnet152/pytorch/README.md index d21818f6624c3c5f65eb88067692e9d9af998114..2edecc7ce0976b23af002938a75090f27e67f446 100644 --- a/cv/classification/resnet152/pytorch/README.md +++ b/cv/classification/resnet152/pytorch/README.md @@ -9,6 +9,13 @@ hierarchical features. ResNet152's architecture is particularly effective for la offering improved accuracy over smaller ResNet variants while maintaining computational efficiency through its residual connections. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnet18/pytorch/README.md b/cv/classification/resnet18/pytorch/README.md index e2004f281f073e19a1ff9a8ff93b39c5485808f1..5b6be1875e717d3ff4e7f23ea93711e984264793 100644 --- a/cv/classification/resnet18/pytorch/README.md +++ b/cv/classification/resnet18/pytorch/README.md @@ -8,6 +8,13 @@ problems and allowing for better feature learning. ResNet18 achieves strong perf while maintaining computational efficiency. Its compact architecture makes it suitable for applications with limited resources, serving as a backbone for various computer vision tasks like object detection and segmentation. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnet50/paddlepaddle/README.md b/cv/classification/resnet50/paddlepaddle/README.md index 5e029d807d0f6bc645f5953dbf9aefed3296b716..1963a69f52d8b968f27e95e9dbce8eec48ec84e8 100644 --- a/cv/classification/resnet50/paddlepaddle/README.md +++ b/cv/classification/resnet50/paddlepaddle/README.md @@ -8,6 +8,13 @@ gradient problems. This architecture achieved breakthrough performance in image ImageNet competition. ResNet50's efficient design and strong feature extraction capabilities make it widely used in computer vision applications, serving as a backbone for various tasks like object detection and segmentation. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.3.0 | 22.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnet50/pytorch/README.md b/cv/classification/resnet50/pytorch/README.md index ce76b5930ce52852327420485f9c6da994696600..bf8583620ed11cf336ce88788198e11ccf3d4b10 100644 --- a/cv/classification/resnet50/pytorch/README.md +++ b/cv/classification/resnet50/pytorch/README.md @@ -8,6 +8,13 @@ gradient problems. This architecture achieved breakthrough performance in image ImageNet competition. ResNet50's efficient design and strong feature extraction capabilities make it widely used in computer vision applications, serving as a backbone for various tasks like object detection and segmentation. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnet50/tensorflow/README.md b/cv/classification/resnet50/tensorflow/README.md index d789c558cc06f9804c44a814320f6f96d3422439..75e39525a025791c07a6780a48ac43e914c1461c 100644 --- a/cv/classification/resnet50/tensorflow/README.md +++ b/cv/classification/resnet50/tensorflow/README.md @@ -8,6 +8,13 @@ gradient problems. This architecture achieved breakthrough performance in image ImageNet competition. ResNet50's efficient design and strong feature extraction capabilities make it widely used in computer vision applications, serving as a backbone for various tasks like object detection and segmentation. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnext101_32x8d/pytorch/README.md b/cv/classification/resnext101_32x8d/pytorch/README.md index fa33d3a431f11244d97a185ab7ca7c318099e01e..dce3cc2b6bd656fd8084be9672c3bce4655c8b7a 100644 --- a/cv/classification/resnext101_32x8d/pytorch/README.md +++ b/cv/classification/resnext101_32x8d/pytorch/README.md @@ -9,6 +9,13 @@ state-of-the-art performance in image classification tasks by combining the bene multi-branch transformations. Its architecture is particularly effective for large-scale visual recognition tasks, offering improved accuracy over standard ResNet models. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnext50_32x4d/mindspore/README.md b/cv/classification/resnext50_32x4d/mindspore/README.md index 6a1ea169b578e3e7d9d93904c85c30fb0273bc08..43309b4e0e4cfbf39bf2d989f38649c76910461e 100644 --- a/cv/classification/resnext50_32x4d/mindspore/README.md +++ b/cv/classification/resnext50_32x4d/mindspore/README.md @@ -8,6 +8,13 @@ representation. The 32x4d variant has 32 groups with 4-dimensional transformatio accuracy than ResNet50 with similar computational complexity, making it efficient for image classification tasks. ResNeXt50's design has influenced many subsequent CNN architectures in computer vision. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/resnext50_32x4d/pytorch/README.md b/cv/classification/resnext50_32x4d/pytorch/README.md index 011b82080e34f0f4288c8800cf89233566661283..868513e33d2eaaeb0e5ab7c8195de5fbf0b1f7a0 100644 --- a/cv/classification/resnext50_32x4d/pytorch/README.md +++ b/cv/classification/resnext50_32x4d/pytorch/README.md @@ -8,6 +8,13 @@ representation. The 32x4d variant has 32 groups with 4-dimensional transformatio accuracy than ResNet50 with similar computational complexity, making it efficient for image classification tasks. ResNeXt50's design has influenced many subsequent CNN architectures in computer vision. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/se_resnet50_vd/paddlepaddle/README.md b/cv/classification/se_resnet50_vd/paddlepaddle/README.md index 88ae57427efde7b63b5fcd84de436d791ba068ca..c8f840a2e8667911def2414e9045e699831d59f7 100644 --- a/cv/classification/se_resnet50_vd/paddlepaddle/README.md +++ b/cv/classification/se_resnet50_vd/paddlepaddle/README.md @@ -8,6 +8,13 @@ variant downsampling preserves more information during feature map reduction. Th than standard ResNet50 while maintaining computational efficiency. SE_ResNet50_vd is particularly effective for image classification tasks, offering improved performance through better feature learning and channel attention mechanisms. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/seresnext/pytorch/README.md b/cv/classification/seresnext/pytorch/README.md index ffc3ffb75795ae3f64d8489bde8317e863597724..1ca557c5a99d7a6fa0ed0eb225bd5d1e84dd006f 100644 --- a/cv/classification/seresnext/pytorch/README.md +++ b/cv/classification/seresnext/pytorch/README.md @@ -9,6 +9,13 @@ each block while maintaining computational efficiency. SEResNeXt achieves state- classification tasks by effectively combining multi-branch transformations with channel-wise attention, making it particularly suitable for complex visual recognition problems. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/shufflenetv2/paddlepaddle/README.md b/cv/classification/shufflenetv2/paddlepaddle/README.md index 00d024039a77a5eab63461249dbe6f00231339bc..6246c4060cf9a8046212f7c8478398548e263467 100644 --- a/cv/classification/shufflenetv2/paddlepaddle/README.md +++ b/cv/classification/shufflenetv2/paddlepaddle/README.md @@ -8,6 +8,13 @@ like FLOPs. The model features a channel split operation and optimized channel s accuracy and inference speed. ShuffleNetv2 achieves state-of-the-art performance in mobile image classification tasks while maintaining low computational complexity, making it ideal for resource-constrained applications. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/shufflenetv2/pytorch/README.md b/cv/classification/shufflenetv2/pytorch/README.md index 1f28a20cafd4a8fd5d5a7bab0d09982dd6fa2e61..679e31bf2db9a74f4e6e4d00d4f73312bd4d0e98 100644 --- a/cv/classification/shufflenetv2/pytorch/README.md +++ b/cv/classification/shufflenetv2/pytorch/README.md @@ -8,6 +8,13 @@ like FLOPs. The model features a channel split operation and optimized channel s accuracy and inference speed. ShuffleNetv2 achieves state-of-the-art performance in mobile image classification tasks while maintaining low computational complexity, making it ideal for resource-constrained applications. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/squeezenet/pytorch/README.md b/cv/classification/squeezenet/pytorch/README.md index b084df03d0517eb8a2e631f8369cff08db3100b5..d4d26fe97352a13feeaf3a0dbed66c5555e3ca92 100644 --- a/cv/classification/squeezenet/pytorch/README.md +++ b/cv/classification/squeezenet/pytorch/README.md @@ -9,6 +9,13 @@ maintaining good classification performance. SqueezeNet is particularly suitable where model size and computational efficiency are critical, offering a balance between accuracy and resource requirements. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/swin_transformer/paddlepaddle/README.md b/cv/classification/swin_transformer/paddlepaddle/README.md index dda047a8de7ab86ebbe55b569d277763fd82bc44..00894d79bc3f5e0f208b6599f8b71b7c6e68ec61 100644 --- a/cv/classification/swin_transformer/paddlepaddle/README.md +++ b/cv/classification/swin_transformer/paddlepaddle/README.md @@ -9,6 +9,13 @@ suitable for both image classification and dense prediction tasks. Swin Transfor performance in various vision tasks, offering a powerful alternative to traditional convolutional networks with its transformer-based approach. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/swin_transformer/pytorch/README.md b/cv/classification/swin_transformer/pytorch/README.md index d62655a1abed909e1c5d84030818bd8f98aa95fa..049d4370226fa12eecce737f246648972b70ec8e 100644 --- a/cv/classification/swin_transformer/pytorch/README.md +++ b/cv/classification/swin_transformer/pytorch/README.md @@ -9,6 +9,13 @@ suitable for both image classification and dense prediction tasks. Swin Transfor performance in various vision tasks, offering a powerful alternative to traditional convolutional networks with its transformer-based approach. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/vgg/paddlepaddle/README.md b/cv/classification/vgg/paddlepaddle/README.md index 44829679ee27e8e12e88318f0c9a488e02a1d70f..19403801b8d6536e36f9d95f3600c3efd0878d4c 100644 --- a/cv/classification/vgg/paddlepaddle/README.md +++ b/cv/classification/vgg/paddlepaddle/README.md @@ -8,6 +8,13 @@ includes 16 or 19 weight layers, with VGG16 being the most popular variant. VGG image classification tasks and became a benchmark for subsequent CNN architectures. Its uniform structure and deep design have influenced many modern deep learning models in computer vision. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.3.0 | 22.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/vgg/pytorch/README.md b/cv/classification/vgg/pytorch/README.md index 34ce4101fe03981088726e2af5fa0f63c09e894f..62504a868539054c8d5152c44ba0f28c7fa2f6c2 100644 --- a/cv/classification/vgg/pytorch/README.md +++ b/cv/classification/vgg/pytorch/README.md @@ -8,6 +8,13 @@ includes 16 or 19 weight layers, with VGG16 being the most popular variant. VGG image classification tasks and became a benchmark for subsequent CNN architectures. Its uniform structure and deep design have influenced many modern deep learning models in computer vision. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/vgg/tensorflow/README.md b/cv/classification/vgg/tensorflow/README.md index 0a1ec7786770d454c246c94296aa35bf5352c4b6..5147b80ff9fdba51e8abde9943b3f7d01b70dab7 100644 --- a/cv/classification/vgg/tensorflow/README.md +++ b/cv/classification/vgg/tensorflow/README.md @@ -8,6 +8,13 @@ includes 16 or 19 weight layers, with VGG16 being the most popular variant. VGG image classification tasks and became a benchmark for subsequent CNN architectures. Its uniform structure and deep design have influenced many modern deep learning models in computer vision. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/wavemlp/pytorch/README.md b/cv/classification/wavemlp/pytorch/README.md index b9a884a00fd1dfdab7cec705d89906d1a9db9ac5..10578219db76f8c617ae612c7a71542f1c40fd23 100644 --- a/cv/classification/wavemlp/pytorch/README.md +++ b/cv/classification/wavemlp/pytorch/README.md @@ -8,6 +8,13 @@ in different images. This approach enhances feature aggregation in pure MLP arch CNNs and transformers in tasks like image classification and object detection. Wave-MLP offers efficient computation while maintaining high accuracy, making it suitable for various computer vision applications. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/wide_resnet101_2/pytorch/README.md b/cv/classification/wide_resnet101_2/pytorch/README.md index 495bec852c2017ade18a6d75a74ee2c9a632b7ea..bdbd900e436c2f07aa1b3e9176af677b4eb512d2 100644 --- a/cv/classification/wide_resnet101_2/pytorch/README.md +++ b/cv/classification/wide_resnet101_2/pytorch/README.md @@ -8,6 +8,13 @@ This architecture achieves superior performance in image classification tasks by efficient training. Wide_ResNet101_2 demonstrates improved accuracy over standard ResNet variants while maintaining computational efficiency, making it suitable for complex visual recognition tasks requiring high performance. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/xception/paddlepaddle/README.md b/cv/classification/xception/paddlepaddle/README.md index c31dd9ef95113b0ac0a3ec101526dbf0ffd33356..cde39ea88a6966fe352e5ad56363541d6144e7ef 100644 --- a/cv/classification/xception/paddlepaddle/README.md +++ b/cv/classification/xception/paddlepaddle/README.md @@ -9,6 +9,13 @@ spatial correlations. The architecture achieves state-of-the-art performance in efficient alternative to traditional CNNs. Its design is particularly suitable for applications requiring both high accuracy and computational efficiency. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/classification/xception/pytorch/README.md b/cv/classification/xception/pytorch/README.md index beb31b487c02b626c8c7b4b04ddab69de122e563..a8293176fb1e234808f3dc4cc588f09b0d6c60fc 100755 --- a/cv/classification/xception/pytorch/README.md +++ b/cv/classification/xception/pytorch/README.md @@ -9,6 +9,13 @@ spatial correlations. The architecture achieves state-of-the-art performance in efficient alternative to traditional CNNs. Its design is particularly suitable for applications requiring both high accuracy and computational efficiency. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/atss_mmdet/pytorch/README.md b/cv/detection/atss_mmdet/pytorch/README.md index f5c0fc7b547aafb9bd6cdc650020ae685c23b20e..a7af4f929a386be32196ee32d85f6e037e59ca88 100644 --- a/cv/detection/atss_mmdet/pytorch/README.md +++ b/cv/detection/atss_mmdet/pytorch/README.md @@ -9,6 +9,13 @@ optimal sample selection thresholds, eliminating the need for manual tuning. Thi and anchor-free detectors, achieving state-of-the-art performance on benchmarks like COCO without additional computational overhead. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.06 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/autoassign/pytorch/README.md b/cv/detection/autoassign/pytorch/README.md index f874fe8bd8e8f9423d9f4a95e29a84e4e62547ad..8769f4370b291e9e788e961b3aadf1a2bc6d42e7 100755 --- a/cv/detection/autoassign/pytorch/README.md +++ b/cv/detection/autoassign/pytorch/README.md @@ -1,78 +1,85 @@ -# AutoAssign - -## Model Description - -AutoAssign is an anchor-free object detection model that introduces a fully differentiable label assignment mechanism. -It combines Center Weighting and Confidence Weighting to adaptively determine positive and negative samples during -training. Center Weighting adjusts category-specific prior distributions, while Confidence Weighting customizes -assignment strategies for each instance. 
This approach eliminates the need for manual anchor design and achieves -appearance-aware detection through automatic sample selection, resulting in improved performance and reduced human -intervention in the detection process. - -## Model Preparation - -### Prepare Resources - -```bash -mkdir -p data -ln -s /path/to/coco/ ./data - -# Prepare resnet50_msra-5891d200.pth, skip this if fast network -mkdir -p /root/.cache/torch/hub/checkpoints/ -wget https://download.openmmlab.com/pretrain/third_party/resnet50_msra-5891d200.pth -O /root/.cache/torch/hub/checkpoints/resnet50_msra-5891d200.pth -``` - -Go to visit [COCO official website](https://cocodataset.org/#download), then select the COCO dataset you want to -download. - -Take coco2017 dataset as an example, specify `/path/to/coco2017` to your COCO path in later training process, the -unzipped dataset path structure sholud look like: - -```bash -coco2017 -├── annotations -│   ├── instances_train2017.json -│   ├── instances_val2017.json -│ └── ... -├── train2017 -│ ├── 000000000009.jpg -│ ├── 000000000025.jpg -│ └── ... -├── val2017 -│ ├── 000000000139.jpg -│ ├── 000000000285.jpg -│ └── ... -├── train2017.txt -├── val2017.txt -└── ... -``` - -### Install Dependencies - -```bash -# Install libGL -## CentOS -yum install -y mesa-libGL -## Ubuntu -apt install -y libgl1-mesa-glx - -# install MMDetection -git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1 -cd mmdetection -pip install -v -e . 
-```
-
-## Model Training
-
-```bash
-# One single GPU
-python3 tools/train.py configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py
-
-# Multiple GPUs on one machine
-sed -i 's/python /python3 /g' tools/dist_train.sh
-bash tools/dist_train.sh configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py 8
-```
-
-## References
-
-[mmdetection](https://github.com/open-mmlab/mmdetection)
+# AutoAssign
+
+## Model Description
+
+AutoAssign is an anchor-free object detection model that introduces a fully differentiable label assignment mechanism.
+It combines Center Weighting and Confidence Weighting to adaptively determine positive and negative samples during
+training. Center Weighting adjusts category-specific prior distributions, while Confidence Weighting customizes
+assignment strategies for each instance. This approach eliminates the need for manual anchor design and achieves
+appearance-aware detection through automatic sample selection, resulting in improved performance and reduced human
+intervention in the detection process.
+
+## Supported Environments
+
+| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release |
+|--------|-----------|---------|
+| BI-V150 | 4.2.0 | 25.03 |
+| BI-V100 | 2.2.0 | 22.09 |
+
+## Model Preparation
+
+### Prepare Resources
+
+```bash
+mkdir -p data
+ln -s /path/to/coco/ ./data
+
+# Prepare resnet50_msra-5891d200.pth; skip this step if your network is fast
+mkdir -p /root/.cache/torch/hub/checkpoints/
+wget https://download.openmmlab.com/pretrain/third_party/resnet50_msra-5891d200.pth -O /root/.cache/torch/hub/checkpoints/resnet50_msra-5891d200.pth
+```
+
+Visit the [COCO official website](https://cocodataset.org/#download), then select the COCO dataset you want to
+download.
+
+Take the coco2017 dataset as an example. Specify `/path/to/coco2017` as your COCO path in the later training process;
+the unzipped dataset path structure should look like:
+
+```bash
+coco2017
+├── annotations
+│   ├── instances_train2017.json
+│   ├── instances_val2017.json
+│   └── ...
+├── train2017
+│   ├── 000000000009.jpg
+│   ├── 000000000025.jpg
+│   └── ...
+├── val2017
+│   ├── 000000000139.jpg
+│   ├── 000000000285.jpg
+│   └── ...
+├── train2017.txt
+├── val2017.txt
+└── ...
+```
+
+### Install Dependencies
+
+```bash
+# Install libGL
+## CentOS
+yum install -y mesa-libGL
+## Ubuntu
+apt install -y libgl1-mesa-glx
+
+# Install MMDetection
+git clone https://github.com/open-mmlab/mmdetection.git -b v3.3.0 --depth=1
+cd mmdetection
+pip install -v -e .
+```
+
+## Model Training
+
+```bash
+# On single GPU
+python3 tools/train.py configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py
+
+# Multiple GPUs on one machine
+sed -i 's/python /python3 /g' tools/dist_train.sh
+bash tools/dist_train.sh configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py 8
+```
+
+## References
+
+[mmdetection](https://github.com/open-mmlab/mmdetection)
diff --git a/cv/detection/cascade_rcnn_mmdet/pytorch/README.md b/cv/detection/cascade_rcnn_mmdet/pytorch/README.md
index e673a73d2d6a3e1166f5acef34d46caa7ef1175e..d22c83b4d0d4d5f3cce18ccfc1c73b84dd1f8854 100644
--- a/cv/detection/cascade_rcnn_mmdet/pytorch/README.md
+++ b/cv/detection/cascade_rcnn_mmdet/pytorch/README.md
@@ -8,6 +8,13 @@ stage, addressing the paradox of high-quality detection by minimizing overfittin
 between training and inference. This architecture achieves state-of-the-art performance on various datasets, including
 COCO, and can be extended to instance segmentation tasks, outperforming models like Mask R-CNN.
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.06 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/centermask2/pytorch/README.md b/cv/detection/centermask2/pytorch/README.md index 4d5d8e3ce3750c1abe3e6ea7130cff5f3b812758..3ae78062d6511e0db857e3bf1e24698c0537c7ea 100644 --- a/cv/detection/centermask2/pytorch/README.md +++ b/cv/detection/centermask2/pytorch/README.md @@ -9,6 +9,13 @@ predictions effectively. The model achieves state-of-the-art performance on COCO training and inference capabilities. It's particularly effective for complex scenes with overlapping objects and varying scales. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/centernet/paddlepaddle/README.md b/cv/detection/centernet/paddlepaddle/README.md index afec74dc023590747d9b83cd253e9d66eea0d23e..051c3c6b0e6bb828ebc8c12dae13a76b673d539d 100644 --- a/cv/detection/centernet/paddlepaddle/README.md +++ b/cv/detection/centernet/paddlepaddle/README.md @@ -8,6 +8,13 @@ properties like size and orientation. This approach eliminates the need for anch making it simpler and faster. CenterNet achieves state-of-the-art speed-accuracy trade-offs on benchmarks like COCO and can be extended to 3D detection and pose estimation tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/centernet/pytorch/README.md b/cv/detection/centernet/pytorch/README.md index dd0198cf5547007b95afc8b73307347a275e066a..ee6d7c70cbaebda40050f4ecbf37ceb7eb62b2f8 100644 --- a/cv/detection/centernet/pytorch/README.md +++ b/cv/detection/centernet/pytorch/README.md @@ -8,6 +8,13 @@ properties like size and orientation. This approach eliminates the need for anch making it simpler and faster. CenterNet achieves state-of-the-art speed-accuracy trade-offs on benchmarks like COCO and can be extended to 3D detection and pose estimation tasks. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/co-detr/pytorch/README.md b/cv/detection/co-detr/pytorch/README.md index b73ec656bfebf59748348441713e85b104da4008..eae12ca492fb71f673b4a29b27c22a7e9ebb938a 100644 --- a/cv/detection/co-detr/pytorch/README.md +++ b/cv/detection/co-detr/pytorch/README.md @@ -8,6 +8,13 @@ assignments and optimizes decoder attention using customized positive queries. C performance, being the first model to reach 66.0 AP on COCO test-dev with ViT-L. This approach significantly boosts detection accuracy and efficiency while maintaining end-to-end training simplicity. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/cornernet_mmdet/pytorch/README.md b/cv/detection/cornernet_mmdet/pytorch/README.md index ffaf9595706421e0d74ec2914ad9d261796a23ea..4122e33f1d6d14feab843ba7585673aed9ec7fb0 100644 --- a/cv/detection/cornernet_mmdet/pytorch/README.md +++ b/cv/detection/cornernet_mmdet/pytorch/README.md @@ -9,6 +9,13 @@ addition to our novel formulation, we introduce corner pooling, a new type of po better localize corners. Experiments show that CornerNet achieves a 42.2% AP on MS COCO, outperforming all existing one-stage detectors. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.06 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/dcnv2_mmdet/pytorch/README.md b/cv/detection/dcnv2_mmdet/pytorch/README.md index 8977ca8f0a46a1aaedfa30b513d136f3757e2aff..e001452dfebb054451865db7598bae529fa5cfc0 100644 --- a/cv/detection/dcnv2_mmdet/pytorch/README.md +++ b/cv/detection/dcnv2_mmdet/pytorch/README.md @@ -9,6 +9,13 @@ comprehensively throughout the network, enabling superior adaptation to object g state-of-the-art performance on object detection and instance segmentation tasks, particularly on benchmarks like COCO, while maintaining computational efficiency. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.06 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/detr/paddlepaddle/README.md b/cv/detection/detr/paddlepaddle/README.md index 701b4f8b11a23fade77bbb6f6c1555b3bafef5bb..d0a1792babda85ee672d8ea91f29b86f3e6861b3 100644 --- a/cv/detection/detr/paddlepaddle/README.md +++ b/cv/detection/detr/paddlepaddle/README.md @@ -8,6 +8,13 @@ anchor boxes and non-maximum suppression. DETR uses a transformer encoder-decode and predict object bounding boxes and classes simultaneously. This end-to-end approach simplifies the detection pipeline while achieving competitive performance on benchmarks like COCO, offering a new paradigm for object detection tasks. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/fasterrcnn/pytorch/README.md b/cv/detection/fasterrcnn/pytorch/README.md index 9b071be039bdca5e0ebb433148bf0d056fdf6e2b..ea068f84826edfc0cb8a0aa7d131b15e54c666b8 100644 --- a/cv/detection/fasterrcnn/pytorch/README.md +++ b/cv/detection/fasterrcnn/pytorch/README.md @@ -8,6 +8,13 @@ cost-free region proposals. This architecture significantly improves detection s predecessors. Faster R-CNN achieves excellent performance on benchmarks like PASCAL VOC and COCO, and serves as the foundation for many winning entries in computer vision competitions. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/fcos/paddlepaddle/README.md b/cv/detection/fcos/paddlepaddle/README.md index bede6c790c6dfb3a9f3d494b072291aa73b4df83..872373472d2b480ffb2ec2310918e8df727c8b8f 100644 --- a/cv/detection/fcos/paddlepaddle/README.md +++ b/cv/detection/fcos/paddlepaddle/README.md @@ -8,6 +8,13 @@ bounding boxes and class labels. FCOS simplifies the detection pipeline, reduces competitive performance on benchmarks like COCO. Its center-ness branch helps suppress low-quality predictions, making it efficient and effective for various detection tasks. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/fcos/pytorch/README.md b/cv/detection/fcos/pytorch/README.md index 2236e5764295baa0199242c8b9cbf3aa3b40113c..0a8dc60435b42496003c22470bd4300405a2e25f 100755 --- a/cv/detection/fcos/pytorch/README.md +++ b/cv/detection/fcos/pytorch/README.md @@ -8,6 +8,12 @@ bounding boxes and class labels. FCOS simplifies the detection pipeline, reduces competitive performance on benchmarks like COCO. Its center-ness branch helps suppress low-quality predictions, making it efficient and effective for various detection tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/mamba_yolo/pytorch/README.md b/cv/detection/mamba_yolo/pytorch/README.md index be6a2484bfa4baae6bab8373b05f0df685e20124..d85c08423606002149ba70cdc5aa3ef01958e162 100644 --- a/cv/detection/mamba_yolo/pytorch/README.md +++ b/cv/detection/mamba_yolo/pytorch/README.md @@ -6,6 +6,12 @@ Mamba-YOLO is an innovative object detection model that integrates State Space M Look Once) architecture to enhance performance in complex visual tasks. This integration aims to improve the model's ability to capture global dependencies and process long-range information efficiently. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.1.1 | 24.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/maskrcnn/paddlepaddle/README.md b/cv/detection/maskrcnn/paddlepaddle/README.md index f98a7cd692c38113ec1cae52ecadffae230890c1..569c30fa357994b5d8601fbe9f63306b7220df8c 100644 --- a/cv/detection/maskrcnn/paddlepaddle/README.md +++ b/cv/detection/maskrcnn/paddlepaddle/README.md @@ -8,6 +8,13 @@ segmentation masks for each instance. Mask R-CNN maintains the two-stage archite fully convolutional network for mask prediction. This model achieves state-of-the-art performance on tasks like object detection, instance segmentation, and human pose estimation. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.3.0 | 22.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/maskrcnn/pytorch/README.md b/cv/detection/maskrcnn/pytorch/README.md index 9361b58f9951bae5d738fedd5c840c106f0c119f..4e6f11d800c0c60d7b8582210fc37ff676924a3a 100644 --- a/cv/detection/maskrcnn/pytorch/README.md +++ b/cv/detection/maskrcnn/pytorch/README.md @@ -8,6 +8,13 @@ segmentation masks for each instance. Mask R-CNN maintains the two-stage archite fully convolutional network for mask prediction. This model achieves state-of-the-art performance on tasks like object detection, instance segmentation, and human pose estimation. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/oc_sort/paddlepaddle/README.md b/cv/detection/oc_sort/paddlepaddle/README.md index 2d8a0559ebd3e351f68bd19440e56303bb8cad67..1b52593d1f132c5fe5e524d0f281bac73e4024e1 100644 --- a/cv/detection/oc_sort/paddlepaddle/README.md +++ b/cv/detection/oc_sort/paddlepaddle/README.md @@ -9,6 +9,13 @@ observation-centric updates, making it more reliable for object tracking in chal flexible for integration with various detectors and matching modules, offering improved accuracy without compromising speed. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/oriented_reppoints/pytorch/README.md b/cv/detection/oriented_reppoints/pytorch/README.md index da3c445d9c8b4915bdc0f7ed23dd05beb3d39946..4baa32c726a9213b825e9f86cf297654fb82b080 100644 --- a/cv/detection/oriented_reppoints/pytorch/README.md +++ b/cv/detection/oriented_reppoints/pytorch/README.md @@ -8,6 +8,13 @@ instances, offering more precise detection than traditional bounding box approac oriented conversion functions for accurate classification and localization, along with a quality assessment scheme to handle cluttered backgrounds. It achieves state-of-the-art performance on aerial datasets like DOTA and HRSC2016. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/picodet/paddlepaddle/README.md b/cv/detection/picodet/paddlepaddle/README.md index cee3cff1fd3b17decd168c8eba95ee2f390c227a..831cf8508b308fdbb36f227156236a059550a5ca 100644 --- a/cv/detection/picodet/paddlepaddle/README.md +++ b/cv/detection/picodet/paddlepaddle/README.md @@ -8,6 +8,13 @@ end-to-end inference by including post-processing in the network, enabling direc achieves an excellent balance between speed and accuracy, making it ideal for applications requiring real-time detection on resource-constrained devices. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/pp-yoloe/paddlepaddle/README.md b/cv/detection/pp-yoloe/paddlepaddle/README.md index eed132eaab66f211e97f7e7f69b056f90b472a40..f796946c8d664d3c1b801253c505f445a51b7874 100644 --- a/cv/detection/pp-yoloe/paddlepaddle/README.md +++ b/cv/detection/pp-yoloe/paddlepaddle/README.md @@ -9,6 +9,13 @@ Convolution, ensuring compatibility with diverse hardware. It achieves excellent suitable for real-time applications. The model's efficient architecture and optimization techniques make it a top choice for object detection tasks. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.3.0 | 22.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/pp_yoloe+/paddlepaddle/README.md b/cv/detection/pp_yoloe+/paddlepaddle/README.md index e724119bc91e3642147d5280597ae2b1f388e1ad..1a8d8fcffe39e999b07a49bc9850c2b93891eb91 100644 --- a/cv/detection/pp_yoloe+/paddlepaddle/README.md +++ b/cv/detection/pp_yoloe+/paddlepaddle/README.md @@ -8,6 +8,13 @@ configurable through width and depth multipliers. PP-YOLOE+ maintains hardware c operators while achieving state-of-the-art speed-accuracy trade-offs. Its optimized architecture makes it ideal for real-time applications, offering superior detection performance across various scenarios and hardware platforms. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.1 | 24.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/pvanet/pytorch/README.md b/cv/detection/pvanet/pytorch/README.md index ef0709db5eae4c9af0a7e21332d3597d173f402b..5385dd1539e49b2bee59c8a5f581ddd38a82057f 100755 --- a/cv/detection/pvanet/pytorch/README.md +++ b/cv/detection/pvanet/pytorch/README.md @@ -8,6 +8,13 @@ channels," incorporating innovations like C.ReLU and Inception structures. PVANe benchmarks with significantly reduced computational requirements compared to heavier networks. Its optimized design makes it suitable for real-time applications where both speed and accuracy are crucial. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/reppoints_mmdet/pytorch/README.md b/cv/detection/reppoints_mmdet/pytorch/README.md index 82370f5b20ad10ed58ad96aca1aa7570097ed5c7..10283fb05e0c630041e9fa2bd612ecfa07c6320b 100644 --- a/cv/detection/reppoints_mmdet/pytorch/README.md +++ b/cv/detection/reppoints_mmdet/pytorch/README.md @@ -8,6 +8,13 @@ that bound objects and indicate semantically significant areas. RepPoints achiev benchmarks while eliminating the need for anchor boxes. Its finer representation enables better object understanding and more accurate detection, particularly for complex shapes and overlapping objects. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.06 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/retinanet/paddlepaddle/README.md b/cv/detection/retinanet/paddlepaddle/README.md index 9a007f676f8be5b62ba8086310d2488015d10bb4..d9281fb9ad98e100605757e90b124d89ceafb1a9 100644 --- a/cv/detection/retinanet/paddlepaddle/README.md +++ b/cv/detection/retinanet/paddlepaddle/README.md @@ -8,6 +8,13 @@ efficiently. RetinaNet achieves high accuracy while maintaining competitive spee detection tasks. Its single-stage architecture combines the accuracy of two-stage detectors with the speed of single-stage approaches, offering an excellent balance between performance and efficiency. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/retinanet/pytorch/README.md b/cv/detection/retinanet/pytorch/README.md index 0b0dd12f7d57c58d4030ebaa8643fa74a1cc5c64..fa7d824ebbf76e7495fdcbc77d2411cec0f4e884 100644 --- a/cv/detection/retinanet/pytorch/README.md +++ b/cv/detection/retinanet/pytorch/README.md @@ -8,6 +8,13 @@ efficiently. RetinaNet achieves high accuracy while maintaining competitive spee detection tasks. Its single-stage architecture combines the accuracy of two-stage detectors with the speed of single-stage approaches, offering an excellent balance between performance and efficiency. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/rt-detr/pytorch/README.md b/cv/detection/rt-detr/pytorch/README.md index 449d226d60efdf3ad56d1f301316c1b80d06aed7..eea8b4b6143463c6b8424b783eb4090d25188b67 100644 --- a/cv/detection/rt-detr/pytorch/README.md +++ b/cv/detection/rt-detr/pytorch/README.md @@ -8,6 +8,13 @@ speed. RT-DETR achieves competitive accuracy with significantly faster inference applications requiring real-time performance. The model preserves the end-to-end detection capabilities of DETR while addressing its computational challenges, offering a practical solution for time-sensitive detection tasks. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.06 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/rtmdet/pytorch/README.md b/cv/detection/rtmdet/pytorch/README.md index f5feef5d09256f3a90beb4f8e30e922eede8c0e8..c071d995313065ba72d04639c6b8e314f8d9080f 100644 --- a/cv/detection/rtmdet/pytorch/README.md +++ b/cv/detection/rtmdet/pytorch/README.md @@ -8,6 +8,13 @@ achieves state-of-the-art accuracy with exceptional speed, reaching 300+ FPS on sizes for different applications and excels in tasks like instance segmentation and rotated object detection. Its design provides insights for versatile real-time detection systems. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/solov2/paddlepaddle/README.md b/cv/detection/solov2/paddlepaddle/README.md index 8213cdc6bc5ca3a2314281cd3a852f4ffb093d7a..0ee186d75adc7100a74b3170f72e279b3a0978b0 100644 --- a/cv/detection/solov2/paddlepaddle/README.md +++ b/cv/detection/solov2/paddlepaddle/README.md @@ -8,6 +8,13 @@ and faster approach compared to traditional methods. SOLOv2 introduces dynamic c suppression to improve mask quality and processing speed. The model achieves strong performance on instance segmentation tasks while maintaining real-time capabilities, making it suitable for various computer vision applications. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/ssd/mindspore/README.md b/cv/detection/ssd/mindspore/README.md index f114071d85f0e3551b03c7167d81645821a6b3c8..05ebee89866a32a3a46b4776b38811085512445d 100755 --- a/cv/detection/ssd/mindspore/README.md +++ b/cv/detection/ssd/mindspore/README.md @@ -7,6 +7,12 @@ class scores in a single forward pass. It uses a set of default boxes at differe multiple feature maps to detect objects of various sizes. SSD combines predictions from different layers to handle objects at different resolutions, offering a good balance between speed and accuracy for real-time detection tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/ssd/paddlepaddle/README.md b/cv/detection/ssd/paddlepaddle/README.md index 306f9c2cce8c4b83184d2ce8d0e3fe9baeedcc10..70d27ec7e30905d7d6a1c627653b529b5a897e9f 100644 --- a/cv/detection/ssd/paddlepaddle/README.md +++ b/cv/detection/ssd/paddlepaddle/README.md @@ -7,6 +7,13 @@ class scores in a single forward pass. It uses a set of default boxes at differe multiple feature maps to detect objects of various sizes. SSD combines predictions from different layers to handle objects at different resolutions, offering a good balance between speed and accuracy for real-time detection tasks. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.3.0 | 22.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/ssd/pytorch/README.md b/cv/detection/ssd/pytorch/README.md index 7535f4023424563748662115559465df3f797396..0f918fcfcb8f74246659d8793dae35930948f691 100644 --- a/cv/detection/ssd/pytorch/README.md +++ b/cv/detection/ssd/pytorch/README.md @@ -7,6 +7,12 @@ class scores in a single forward pass. It uses a set of default boxes at differe multiple feature maps to detect objects of various sizes. SSD combines predictions from different layers to handle objects at different resolutions, offering a good balance between speed and accuracy for real-time detection tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/ssd/tensorflow/README.md b/cv/detection/ssd/tensorflow/README.md index 921a0966e6fd07ee8bbd165224b7d0e4488324f4..d317a4c2a85c5154c3abf42f98d33577abd9cb86 100644 --- a/cv/detection/ssd/tensorflow/README.md +++ b/cv/detection/ssd/tensorflow/README.md @@ -7,6 +7,13 @@ class scores in a single forward pass. It uses a set of default boxes at differe multiple feature maps to detect objects of various sizes. SSD combines predictions from different layers to handle objects at different resolutions, offering a good balance between speed and accuracy for real-time detection tasks. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolof/pytorch/README.md b/cv/detection/yolof/pytorch/README.md index f71b4de5d8eb095af2e25ded235134173306aebf..5f72a8b0b0cf4abe5b1c610b0de6dd8b8b1450c5 100755 --- a/cv/detection/yolof/pytorch/README.md +++ b/cv/detection/yolof/pytorch/README.md @@ -8,6 +8,13 @@ multi-level approaches. YOLOF introduces two key components: Dilated Encoder for Uniform Matching for balanced positive samples. The model achieves competitive accuracy with RetinaNet while being 2.5x faster, making it suitable for real-time detection tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov10/pytorch/README.md b/cv/detection/yolov10/pytorch/README.md index 4392fa5f32e19ee78ed667c625521e9950b97c01..e88348f255038a7c5c8e6432d6e50277a7177a6c 100644 --- a/cv/detection/yolov10/pytorch/README.md +++ b/cv/detection/yolov10/pytorch/README.md @@ -8,6 +8,13 @@ state-of-the-art performance with reduced computational overhead, offering super various model scales. Built on the Ultralytics framework, it addresses limitations of previous YOLO versions, making it ideal for real-time applications requiring fast and accurate object detection in diverse scenarios. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov3/paddlepaddle/README.md b/cv/detection/yolov3/paddlepaddle/README.md index b7f6ceadc1deb3b7fe9d9e675bf52b6dcd5efaa9..0d3c607beef159535544dbbc7082ad71b66cae68 100644 --- a/cv/detection/yolov3/paddlepaddle/README.md +++ b/cv/detection/yolov3/paddlepaddle/README.md @@ -8,6 +8,13 @@ competitive performance with faster inference times compared to other detectors. pass, making it efficient for real-time applications. The model balances speed and accuracy, making it popular for practical detection tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.3.0 | 22.12 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov3/pytorch/README.md b/cv/detection/yolov3/pytorch/README.md index 7cdc40ca400233a9232c3b8df9386ed24468772b..a16d7f8fe9fcbe56bb29e2607fbce03b9093a01c 100755 --- a/cv/detection/yolov3/pytorch/README.md +++ b/cv/detection/yolov3/pytorch/README.md @@ -8,6 +8,13 @@ competitive performance with faster inference times compared to other detectors. pass, making it efficient for real-time applications. The model balances speed and accuracy, making it popular for practical detection tasks. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov3/tensorflow/README.md b/cv/detection/yolov3/tensorflow/README.md index cfed87ca2d4e5a7e3662a976ae77ae0ec5d5d5a2..6a75348e5c9410c27db4732ee5e624b16e4aeb54 100644 --- a/cv/detection/yolov3/tensorflow/README.md +++ b/cv/detection/yolov3/tensorflow/README.md @@ -8,6 +8,13 @@ competitive performance with faster inference times compared to other detectors. pass, making it efficient for real-time applications. The model balances speed and accuracy, making it popular for practical detection tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov5/paddlepaddle/README.md b/cv/detection/yolov5/paddlepaddle/README.md index dc7b4891528a3eb9468f93754acede200d9c89f6..003cf1455f3c57db1fb004198ced1c403da495f7 100644 --- a/cv/detection/yolov5/paddlepaddle/README.md +++ b/cv/detection/yolov5/paddlepaddle/README.md @@ -7,6 +7,12 @@ accuracy. It features a streamlined design with enhanced data augmentation and a multiple model sizes (n/s/m/l/x) for different performance needs. The model is known for its ease of use, fast training, and efficient inference, making it popular for real-time detection tasks across various applications. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 3.1.1 | 24.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov5/pytorch/README.md b/cv/detection/yolov5/pytorch/README.md index d445942503206fdf1587b163a4cbec7b945af25c..81bdb144f67c6506f0bdf8200f3191ccc4f0dfa0 100644 --- a/cv/detection/yolov5/pytorch/README.md +++ b/cv/detection/yolov5/pytorch/README.md @@ -7,6 +7,13 @@ accuracy. It features a streamlined design with enhanced data augmentation and a multiple model sizes (n/s/m/l/x) for different performance needs. The model is known for its ease of use, fast training, and efficient inference, making it popular for real-time detection tasks across various applications. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 2.2.0 | 22.09 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov6/pytorch/README.md b/cv/detection/yolov6/pytorch/README.md index 8199002884970fcb1ab413f76eacbe0fb29f90f9..b5e3f681a539caab75e1ea8c9a87f048c12127c1 100644 --- a/cv/detection/yolov6/pytorch/README.md +++ b/cv/detection/yolov6/pytorch/README.md @@ -9,6 +9,13 @@ speed. It introduces innovative quantization methods for efficient deployment. T performance compared to other YOLO variants, making it suitable for diverse real-world applications requiring fast and accurate object detection. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.06 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov7/pytorch/README.md b/cv/detection/yolov7/pytorch/README.md index 4685705435b4adaed578ad98fddefd308dd87a7d..6cce016851810e6762186ea16b5bb6b1af2a5431 100644 --- a/cv/detection/yolov7/pytorch/README.md +++ b/cv/detection/yolov7/pytorch/README.md @@ -8,6 +8,13 @@ optimizes model architecture, training strategies, and inference efficiency with model supports various scales for different performance needs and demonstrates exceptional results on COCO benchmarks. Its efficient design makes it suitable for real-world applications requiring fast and accurate object detection. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.03 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov8/pytorch/README.md b/cv/detection/yolov8/pytorch/README.md index 0f209d86473ef05669ce699a9ae4c2888d673806..c7705d4b9a34d8b613f43cb2551869e6aa38b81e 100644 --- a/cv/detection/yolov8/pytorch/README.md +++ b/cv/detection/yolov8/pytorch/README.md @@ -8,6 +8,13 @@ multiple tasks including instance segmentation, pose estimation, and image class efficiency and ease of use, making it suitable for real-time applications. It maintains the YOLO tradition of fast inference while delivering superior detection performance across various scenarios. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.0.0 | 23.06 | + ## Model Preparation ### Prepare Resources diff --git a/cv/detection/yolov9/pytorch/README.md b/cv/detection/yolov9/pytorch/README.md index 2b7c01facb10472a20a38a25f20539ce0758e6f6..3b6b620fd0af4fd6f82ee17b6113d207b4af2807 100644 --- a/cv/detection/yolov9/pytorch/README.md +++ b/cv/detection/yolov9/pytorch/README.md @@ -8,6 +8,13 @@ introduces innovative features that optimize performance across various hardware tradition of real-time detection while delivering superior results in complex scenarios. Its efficient design makes it suitable for applications requiring fast and accurate object recognition in diverse environments. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.06 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/Yi-1.5-6B/pytorch/README.md b/nlp/llm/Yi-1.5-6B/pytorch/README.md index 9f1b471139eeb6f7671917aef99a931238302a2f..1d0d8e2555e98e7d001432d48961d829850fa15f 100644 --- a/nlp/llm/Yi-1.5-6B/pytorch/README.md +++ b/nlp/llm/Yi-1.5-6B/pytorch/README.md @@ -7,6 +7,12 @@ Targeted as a bilingual language model and trained on 3T multilingual corpus, th strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/Yi-6B/pytorch/README.md b/nlp/llm/Yi-6B/pytorch/README.md index 60a7b1b65450a3c4685587fb9ec8d0f6fe618ea4..5710553cdfa2cc7de1de09d6e013507f407bbf99 100644 --- a/nlp/llm/Yi-6B/pytorch/README.md +++ b/nlp/llm/Yi-6B/pytorch/README.md @@ -4,6 +4,12 @@ The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI. Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/Yi-VL-6B/pytorch/README.md b/nlp/llm/Yi-VL-6B/pytorch/README.md index dd4f9929b6c2b12ec3c40e9b4cddd794366f162f..218c59821825d34f2e8b4ae02a179d2e11f08ade 100644 --- a/nlp/llm/Yi-VL-6B/pytorch/README.md +++ b/nlp/llm/Yi-VL-6B/pytorch/README.md @@ -8,6 +8,12 @@ conversations involving both text and images. Supporting English and Chinese, Yi performance in benchmarks like MMMU and CMMMU. Its ability to process high-resolution images (448×448) and engage in detailed visual question answering makes it a powerful tool for AI-driven image-text analysis and dialogue systems. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/aquila2-34b/pytorch/README.md b/nlp/llm/aquila2-34b/pytorch/README.md index 9c95b4742b5601f0cf0280a9b7ddc27f11b003c2..ef3b0070cd94e4f8cc58c2f1c0b8ee101337aaeb 100644 --- a/nlp/llm/aquila2-34b/pytorch/README.md +++ b/nlp/llm/aquila2-34b/pytorch/README.md @@ -9,6 +9,13 @@ optimizing computational resources. Its architecture enables advanced performanc text generation, summarization, and question answering. The model represents a significant advancement in Chinese language processing, offering improved context understanding and response generation for diverse linguistic tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 3.4.0 | 24.06 | + ## Model Preparation ### Configure 4-node environment diff --git a/nlp/llm/baichuan2-7b/pytorch/README.md b/nlp/llm/baichuan2-7b/pytorch/README.md index b863e06dbf54fd29c8a26e0a4abad19ee661d5f1..6cae758d10a87df7fbb76bdcbaf4978015fbcd9d 100644 --- a/nlp/llm/baichuan2-7b/pytorch/README.md +++ b/nlp/llm/baichuan2-7b/pytorch/README.md @@ -9,6 +9,13 @@ domain-specific applications. Baichuan2-7B's architecture is optimized for effic suitable for both academic research and commercial use. Its open-source nature encourages innovation and development in the field of natural language processing. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 3.4.0 | 24.06 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/bloom-7b1/pytorch/README.md b/nlp/llm/bloom-7b1/pytorch/README.md index 5cb505315a44cee425111788ed350e21d3db8326..97db5ac5dc54932852dc09faca70aa05615653fc 100755 --- a/nlp/llm/bloom-7b1/pytorch/README.md +++ b/nlp/llm/bloom-7b1/pytorch/README.md @@ -7,6 +7,13 @@ data using industrial-scale computational resources. As such, it is able to outp programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 3.4.0 | 24.06 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/chatglm-6b/pytorch/README.md b/nlp/llm/chatglm-6b/pytorch/README.md index ecec64e6ce7a244f8c8f7080bd2a00720e26a8d9..c7059867210f4713446c06fb7b9aa05622dea525 100644 --- a/nlp/llm/chatglm-6b/pytorch/README.md +++ b/nlp/llm/chatglm-6b/pytorch/README.md @@ -9,6 +9,13 @@ in generating human-like responses, particularly in Chinese QA scenarios. Its tr corpora enables it to handle diverse conversational contexts while maintaining computational efficiency and accessibility for local deployment. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.09 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/chatglm2-6b-sft/pytorch/README.md b/nlp/llm/chatglm2-6b-sft/pytorch/README.md index 83e52f6722a639625d267b1a1f136d5675b69b57..46c940b3fa46f1adba8587860b100dabe18fdc92 100644 --- a/nlp/llm/chatglm2-6b-sft/pytorch/README.md +++ b/nlp/llm/chatglm2-6b-sft/pytorch/README.md @@ -8,6 +8,13 @@ fine-tuning with minimal computational resources. Through techniques like model it can operate on GPUs with as little as 7GB of memory. ChatGLM2-6B SFT maintains the original model's bilingual capabilities while offering improved task-specific performance and resource efficiency. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 3.4.0 | 24.06 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/chatglm3-6b/pytorch/README.md b/nlp/llm/chatglm3-6b/pytorch/README.md index a1ac21347f4e2ed601ab556c8e8ff1a3682e7f67..4569ed7c6a2b8e570bdcf9b46086873f73903aee 100644 --- a/nlp/llm/chatglm3-6b/pytorch/README.md +++ b/nlp/llm/chatglm3-6b/pytorch/README.md @@ -9,6 +9,13 @@ efficiency and language understanding. ChatGLM3-6B excels in generating coherent particularly in Chinese dialogue scenarios. Its architecture supports various fine-tuning techniques, making it adaptable for diverse applications while maintaining a low deployment threshold for practical implementation. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.09 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/deepseek_moe_7b/pytorch/README.md b/nlp/llm/deepseek_moe_7b/pytorch/README.md index 1e6abe9e9edf5b41a67536608046751dbf82a521..3c2c8d50b271c84f48520442241dfba5d7ef8339 100644 --- a/nlp/llm/deepseek_moe_7b/pytorch/README.md +++ b/nlp/llm/deepseek_moe_7b/pytorch/README.md @@ -9,6 +9,13 @@ performance, making it suitable for various natural language processing applicat capabilities in language understanding and generation while maintaining a more compact structure compared to its larger counterpart. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.12 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/glm-4/pytorch/README.md b/nlp/llm/glm-4/pytorch/README.md index 83b50988780cb367e89ff15d8c822a2819e19d10..66e24fd8ca644520b2402d9637f3d4891fc59ce3 100644 --- a/nlp/llm/glm-4/pytorch/README.md +++ b/nlp/llm/glm-4/pytorch/README.md @@ -9,6 +9,12 @@ languages and demonstrates exceptional performance in multilingual scenarios. De applications, GLM-4-9B represents a significant advancement in large language model technology with its enhanced capabilities and versatility. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | + ## Model Preparation ### Install Dependencies diff --git a/nlp/llm/gpt2-medium-en/paddlepaddle/README.md b/nlp/llm/gpt2-medium-en/paddlepaddle/README.md index c3e7a5cbb3bbe4d33a767e6e249f14fc90a8fe23..5307a40b583df0f9e80f5051db80d0bc1cee5d0f 100644 --- a/nlp/llm/gpt2-medium-en/paddlepaddle/README.md +++ b/nlp/llm/gpt2-medium-en/paddlepaddle/README.md @@ -9,6 +9,13 @@ generation. Pretrained on a vast corpus of English text, it demonstrates strong generating coherent, contextually relevant text. The model's architecture enables it to capture long-range dependencies in text, making it versatile for diverse language processing applications.
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.09 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/llama-7b/pytorch/README.md b/nlp/llm/llama-7b/pytorch/README.md index f4d6836d981b6f7fd166df4f872d50a863085e1b..4b1151da14123f76d9840b1dfb928048513d9ee3 100644 --- a/nlp/llm/llama-7b/pytorch/README.md +++ b/nlp/llm/llama-7b/pytorch/README.md @@ -9,6 +9,12 @@ generation, question answering, and sentence completion. Its efficient architect while maintaining computational feasibility, making it a versatile tool for various NLP applications and research in language model development. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V100 | 3.1.0 | 23.09 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/llama2-13b/pytorch/README.md b/nlp/llm/llama2-13b/pytorch/README.md index 1e76ae223ad64369d948ce5185437d15ffbda136..9825486fa3c94eb544d6802716796cf55cac4898 100644 --- a/nlp/llm/llama2-13b/pytorch/README.md +++ b/nlp/llm/llama2-13b/pytorch/README.md @@ -6,6 +6,13 @@ Llama 2 is a large language model released by Meta in 2023, with parameters rang the training corpus of Llama 2 is 40% longer, and the context length has been upgraded from 2048 to 4096, allowing for understanding and generating longer texts. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 3.4.0 | 24.06 | + ## Model Preparation ### Configure 2-node environment diff --git a/nlp/llm/llama2-34b/pytorch/README.md b/nlp/llm/llama2-34b/pytorch/README.md index aa3c3bd5e2fd9b26577d576a6dffbb968a20974d..bf40e2d9a759b76782393c5f35908c345ac2bcdd 100644 --- a/nlp/llm/llama2-34b/pytorch/README.md +++ b/nlp/llm/llama2-34b/pytorch/README.md @@ -8,6 +8,13 @@ better understanding of longer texts. This model excels in various natural langu comprehension, and dialogue. Its enhanced architecture and training methodology make it a versatile tool for AI applications, offering state-of-the-art performance in language understanding and generation. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 3.4.0 | 24.06 | + ## Model Preparation ### Configure 4-node environment diff --git a/nlp/llm/llama2-7b/pytorch/README.md b/nlp/llm/llama2-7b/pytorch/README.md index fda863dc7e8532a24caf85274d85b550f294a463..1455bf72da790e8cf3e006e3d04247127a9e2541 100644 --- a/nlp/llm/llama2-7b/pytorch/README.md +++ b/nlp/llm/llama2-7b/pytorch/README.md @@ -8,6 +8,13 @@ better understanding of longer texts. This model excels in various natural langu comprehension. Its enhanced architecture and training methodology make it a powerful tool for AI applications while maintaining computational efficiency compared to larger models in the Llama-2 series. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.0 | 23.12 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/llama2-7b_reward_sft/pytorch/README.md b/nlp/llm/llama2-7b_reward_sft/pytorch/README.md index 7cfaa1525ff262ab8cc133f2c56c89596eeab454..bfbb89f10a2c4bdc75406afb74efe6eb27268b6c 100644 --- a/nlp/llm/llama2-7b_reward_sft/pytorch/README.md +++ b/nlp/llm/llama2-7b_reward_sft/pytorch/README.md @@ -9,6 +9,13 @@ excels in understanding and generating coherent, contextually relevant responses efficient training and inference, making it a powerful tool for developing high-quality conversational AI systems while maintaining computational efficiency. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.1 | 24.03 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/llama2-7b_rlhf/pytorch/README.md b/nlp/llm/llama2-7b_rlhf/pytorch/README.md index 1b18409e7a37769601e1b7530023c59e460c80a5..3ac2d9895605bdc727c5e1e16466058c3e20097e 100644 --- a/nlp/llm/llama2-7b_rlhf/pytorch/README.md +++ b/nlp/llm/llama2-7b_rlhf/pytorch/README.md @@ -11,6 +11,13 @@ to larger models. **Notion: You would better to fine-tune this two models, then do RLHF training as below. 
So that can get good training result.** +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 3.4.0 | 24.06 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/llama2-7b_sft/pytorch/README.md b/nlp/llm/llama2-7b_sft/pytorch/README.md index 4020a1a61f7ccae1e3ed415dfad9a97753939012..db92037f91f6fa598a304f15e6ff62feae40a7de 100644 --- a/nlp/llm/llama2-7b_sft/pytorch/README.md +++ b/nlp/llm/llama2-7b_sft/pytorch/README.md @@ -9,6 +9,13 @@ making it particularly effective for applications requiring precise language und combines the foundational capabilities of Llama-2 with task-specific optimizations, offering improved performance while maintaining computational efficiency. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V100 | 3.1.1 | 24.03 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/llama3_8b/pytorch/README.md b/nlp/llm/llama3_8b/pytorch/README.md index 1f7a136e8e85881bc0b8b49f045e3def1d6a1f75..2638ea20d5bdd12d32639560ed1e596191a42f07 100644 --- a/nlp/llm/llama3_8b/pytorch/README.md +++ b/nlp/llm/llama3_8b/pytorch/README.md @@ -9,6 +9,13 @@ incorporates supervised fine-tuning (SFT) and reinforcement learning with human preferences, ensuring both helpfulness and safety in its responses. Llama3-8B offers state-of-the-art performance in language understanding and generation. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.09 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/llama3_8b_sft/pytorch/README.md b/nlp/llm/llama3_8b_sft/pytorch/README.md index f578e775579411110041a301b9332b48a7528282..75ce851dcea1c32428a772f4a4bcdf0dac63f960 100644 --- a/nlp/llm/llama3_8b_sft/pytorch/README.md +++ b/nlp/llm/llama3_8b_sft/pytorch/README.md @@ -9,6 +9,13 @@ particularly effective for applications requiring precise language understanding foundational capabilities of Llama3 with task-specific optimizations, offering improved performance while maintaining computational efficiency. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.12 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/mamba-2/pytorch/README.md b/nlp/llm/mamba-2/pytorch/README.md index 5a8def87705fe597352a1b4c8de8f60c8894491b..053fe20a5e8ac238410dc8730a89d095546eb505 100644 --- a/nlp/llm/mamba-2/pytorch/README.md +++ b/nlp/llm/mamba-2/pytorch/README.md @@ -7,6 +7,12 @@ Transformer-based large language models (LLMs). It is the second version of the of its predecessor by offering faster inference, improved scalability for long sequences, and lower computational complexity. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.1.1 | 24.12 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/minicpm/pytorch/README.md b/nlp/llm/minicpm/pytorch/README.md index 7df56801f0d28734d3457899565cc06dd621fd4c..ae71d5bad6955abf28a5b92d066031b6d83d90bd 100644 --- a/nlp/llm/minicpm/pytorch/README.md +++ b/nlp/llm/minicpm/pytorch/README.md @@ -9,6 +9,12 @@ Falcon-40B. Furthermore, on the MT-Bench, currently the closest benchmark to use many representative open-source large language models, including Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, and Zephyr-7B-alpha. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | + ## Model Preparation ### Install Dependencies diff --git a/nlp/llm/mixtral/pytorch/README.md b/nlp/llm/mixtral/pytorch/README.md index f2b009e62c308452ae761dff28e37c6dd663c717..b30cce7ebc1364d81e65e23db6277f569a3bc409 100644 --- a/nlp/llm/mixtral/pytorch/README.md +++ b/nlp/llm/mixtral/pytorch/README.md @@ -6,6 +6,13 @@ The Mixtral model is a Mixture of Experts (MoE)-based large language model devel company focusing on open-source AI models. Mixtral is designed to achieve high performance while maintaining computational efficiency, making it an excellent choice for real-world applications.
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.12 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/phi-3/pytorch/README.md b/nlp/llm/phi-3/pytorch/README.md index 1c2264178299d3d1f2c02f9e57e9ce57d8d4c0aa..70a8c7f813556ee5329d89c66772a9a435c7cf2f 100644 --- a/nlp/llm/phi-3/pytorch/README.md +++ b/nlp/llm/phi-3/pytorch/README.md @@ -9,6 +9,12 @@ class, offering a balance between computational efficiency and capability. Their architecture make them ideal for applications requiring lightweight yet powerful language processing solutions across diverse domains. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | + ## Model Preparation ### Install Dependencies diff --git a/nlp/llm/qwen-7b/pytorch/README.md b/nlp/llm/qwen-7b/pytorch/README.md index 18b476a72bd06c49068b8cdac0aa7ba1bb52aa53..f31f50d27d8dd0d6422937d60261e4ec2c13614e 100644 --- a/nlp/llm/qwen-7b/pytorch/README.md +++ b/nlp/llm/qwen-7b/pytorch/README.md @@ -7,6 +7,13 @@ Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained texts, books, codes, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant, which is trained with alignment techniques.
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 3.4.0 | 24.06 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/qwen1.5-14b/pytorch/README.md b/nlp/llm/qwen1.5-14b/pytorch/README.md index 140281577b3af8dcbfcf590d2d3c1c3d5098e6af..e9be99f0e88af208ecc306778364abcb0f5fe6ed 100644 --- a/nlp/llm/qwen1.5-14b/pytorch/README.md +++ b/nlp/llm/qwen1.5-14b/pytorch/README.md @@ -8,6 +8,13 @@ data. In comparison with the previous released Qwen, the improvements include:8 Chat models;Multilingual support of both base and chat models;Stable support of 32K context length for models of all sizes;No need of trust_remote_code. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.09 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/qwen1.5-7b/pytorch/README.md b/nlp/llm/qwen1.5-7b/pytorch/README.md index bc822f259d6b3ea10f112a6ede8dc19cf8f76de7..d123ab33946cfcaecdb903fe54719b6a576230bd 100644 --- a/nlp/llm/qwen1.5-7b/pytorch/README.md +++ b/nlp/llm/qwen1.5-7b/pytorch/README.md @@ -8,6 +8,13 @@ data. In comparison with the previous released Qwen, the improvements include:8 Chat models;Multilingual support of both base and chat models;Stable support of 32K context length for models of all sizes;No need of trust_remote_code. 
+## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.09 | + ## Model Preparation ### Prepare Resources diff --git a/nlp/llm/qwen2.5-7b/pytorch/README.md b/nlp/llm/qwen2.5-7b/pytorch/README.md index 035da33c05bf72bcfa1b6f228df3159391b79f03..b2a150d21112a6346b2114664c816a8063fafb1b 100644 --- a/nlp/llm/qwen2.5-7b/pytorch/README.md +++ b/nlp/llm/qwen2.5-7b/pytorch/README.md @@ -8,6 +8,13 @@ lengths up to 128K tokens and generates outputs up to 8K tokens. The model excel languages and demonstrates robust performance in instruction following and role-play scenarios. Qwen2.5's optimized architecture and specialized expert models make it a versatile tool for diverse AI applications. +## Supported Environments + +| GPU | [IXUCA SDK](https://gitee.com/deep-spark/deepspark#%E5%A4%A9%E6%95%B0%E6%99%BA%E7%AE%97%E8%BD%AF%E4%BB%B6%E6%A0%88-ixuca) | Release | +|--------|-----------|---------| +| BI-V150 | 4.2.0 | 25.03 | +| BI-V150 | 4.1.1 | 24.12 | + ## Model Preparation ### Prepare Resources