From 066ea3ccc8ab7c6aee8b45308dc649591ac6d1cb Mon Sep 17 00:00:00 2001
From: majorli
Date: Mon, 18 Dec 2023 12:29:50 +0800
Subject: [PATCH] fixed 9950eba from https://gitee.com/majorli6/deepsparkhub/pulls/293

bugfix: correct mobilenetv3 model training script name

Signed-off-by: majorli
---
 .../inceptionv3/pytorch/README.md       | 18 ++++-----
 .../mobilenetv3/pytorch/README.md       | 18 ++++-----
 cv/classification/vgg/pytorch/README.md | 37 ++++++++++++-------
 3 files changed, 41 insertions(+), 32 deletions(-)

diff --git a/cv/classification/inceptionv3/pytorch/README.md b/cv/classification/inceptionv3/pytorch/README.md
index 84ba8fac..731ae811 100644
--- a/cv/classification/inceptionv3/pytorch/README.md
+++ b/cv/classification/inceptionv3/pytorch/README.md
@@ -3,12 +3,14 @@
 ## Model description
 Inception-v3 is a convolutional neural network architecture from the Inception family that makes several improvements including using Label Smoothing, Factorized 7 x 7 convolutions, and the use of an auxiliary classifer to propagate label information lower down the network (along with the use of batch normalization for layers in the sidehead).
 
-## Step 1: Installing
+## Step 1: Installation
 
 ```bash
 pip3 install -r requirements.txt
 ```
 
+## Step 2: Preparing datasets
+
 Sign up and login in [ImageNet official website](https://www.image-net.org/index.php), then choose 'Download' to download the whole ImageNet dataset. Specify `/path/to/imagenet` to your ImageNet path in later training process.
 
 The ImageNet dataset path structure should look like:
@@ -27,20 +29,16 @@ imagenet
 └── val_list.txt
 ```
 
-:beers: Done!
-
-## Step 2: Training
-### Multiple GPUs on one machine (AMP)
+## Step 3: Training
 
-Set data path by `export DATA_PATH=/path/to/imagenet`. The following command uses all cards to train:
 ```bash
+# Set data path
+export DATA_PATH=/path/to/imagenet
+
+# Multiple GPUs on one machine (AMP)
 bash train_inception_v3_amp_dist.sh
 ```
 
-:beers: Done!
-
-
-
 ## Reference
 - [torchvision](https://github.com/pytorch/vision/tree/main/references/classification)
 
diff --git a/cv/classification/mobilenetv3/pytorch/README.md b/cv/classification/mobilenetv3/pytorch/README.md
index 0580a8cf..1e83d9b7 100644
--- a/cv/classification/mobilenetv3/pytorch/README.md
+++ b/cv/classification/mobilenetv3/pytorch/README.md
@@ -3,11 +3,13 @@
 ## Model description
 MobileNetV3 is a convolutional neural network that is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm, and then subsequently improved through novel architecture advances. Advances include (1) complementary search techniques, (2) new efficient versions of nonlinearities practical for the mobile setting, (3) new efficient network design.
 
-## Step 1: Installing
+## Step 1: Installation
 ```bash
 pip3 install -r requirements.txt
 ```
 
+## Step 2: Preparing datasets
+
 Sign up and login in [ImageNet official website](https://www.image-net.org/index.php), then choose 'Download' to download the whole ImageNet dataset. Specify `/path/to/imagenet` to your ImageNet path in later training process.
 
 The ImageNet dataset path structure should look like:
@@ -26,17 +28,15 @@ imagenet
 └── val_list.txt
 ```
 
-:beers: Done!
-
-## Step 2: Training
-### Multiple GPUs on one machine (AMP)
-Set data path by `export DATA_PATH=/path/to/imagenet`. The following command uses all cards to train:
+## Step 3: Training
 
 ```bash
-bash train_mobilenet_v3_large_dist.sh
-```
+# Set data path
+export DATA_PATH=/path/to/imagenet
 
-:beers: Done!
+# Multiple GPUs on one machine (AMP)
+bash train_mobilenet_v3_large_amp_dist.sh
+```
 
 ## Reference
 - [torchvision](https://github.com/pytorch/vision/tree/main/references/classification#mobilenetv3-large--small)
 
diff --git a/cv/classification/vgg/pytorch/README.md b/cv/classification/vgg/pytorch/README.md
index dfff6a19..d7be136b 100644
--- a/cv/classification/vgg/pytorch/README.md
+++ b/cv/classification/vgg/pytorch/README.md
@@ -3,23 +3,39 @@
 VGG is a classical convolutional neural network architecture. It was based on an analysis of how to increase the depth of such networks. The network utilises small 3 x 3 filters. Otherwise the network is characterized by its simplicity: the only other components being pooling layers and a fully connected layer.
 
-## Step 1: Preparing
+## Step 1: Installation
 
-### Install requirements
 ```bash
 pip3 install -r requirements.txt
 ```
 
-### Set up dataset path
-Sign up and login in [imagenet official website](https://www.image-net.org/index.php), then choose 'Download' to download the whole imagenet dataset. Specify `/path/to/imagenet` to your imagenet path in later training process.
-:beers: Done!
+## Step 2: Preparing datasets
+Sign up and login in [ImageNet official website](https://www.image-net.org/index.php), then choose 'Download' to download the whole ImageNet dataset. Specify `/path/to/imagenet` to your ImageNet path in later training process.
 
-## Step 2: Training
-### Multiple GPUs on one machine
-Set data path by `export DATA_PATH=/path/to/imagenet`. The following command uses all cards to train:
+The ImageNet dataset path structure should look like:
 
 ```bash
+imagenet
+├── train
+│   └── n01440764
+│       ├── n01440764_10026.JPEG
+│       └── ...
+├── train_list.txt
+├── val
+│   └── n01440764
+│       ├── ILSVRC2012_val_00000293.JPEG
+│       └── ...
+└── val_list.txt
+```
+
+## Step 3: Training
+
+```bash
+# Set data path
+export DATA_PATH=/path/to/imagenet
+
+# Multiple GPUs on one machine
 bash train_vgg16_amp_dist.sh
 ```
 
 Install zlib-1.2.9 if reports "iZLIB_1.2.9 not found" when run train_vgg16_amp_dist.sh
 
@@ -33,10 +49,5 @@ cd ../
 rm -rf zlib-1.2.9.tar.gz zlib-1.2.9/
 ```
 
-
-
-:beers: Done!
-
-
 ## Reference
 - [torchvision](https://github.com/pytorch/vision/tree/main/references/classification)
-- 
Gitee
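All three READMEs touched by this patch assume the same ImageNet layout under `DATA_PATH` (a `train/` and `val/` directory plus `train_list.txt` and `val_list.txt`) before any of the `train_*_amp_dist.sh` scripts is launched. As a quick pre-flight sanity check — not part of the patch itself; the helper name and entry list below are illustrative — a sketch like this can flag an incomplete dataset root:

```python
import os

# Hypothetical helper (not part of this patch or the deepsparkhub repo):
# verify that DATA_PATH points at an ImageNet root with the top-level layout
# the READMEs describe before launching a train_*_amp_dist.sh script.
EXPECTED_ENTRIES = ["train", "val", "train_list.txt", "val_list.txt"]

def missing_imagenet_entries(root):
    """Return the expected top-level entries that are absent under `root`."""
    return [name for name in EXPECTED_ENTRIES
            if not os.path.exists(os.path.join(root, name))]

if __name__ == "__main__":
    root = os.environ.get("DATA_PATH", "/path/to/imagenet")
    missing = missing_imagenet_entries(root)
    if missing:
        print(f"{root} is missing: {', '.join(missing)}")
    else:
        print(f"{root} looks like a complete ImageNet root")
```

With `DATA_PATH` unset it falls back to the placeholder `/path/to/imagenet` and will simply report every entry as missing, so it is safe to run before the dataset is in place.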