diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/LICENSE b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..6feb7e406956e1966aefd5725032a594c04d9a82
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2021 Zhang Bao Quan
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/README.md b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..e3b1fa67c1e3f3388915525bdf0e9406a5b49655
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/README.md
@@ -0,0 +1,109 @@
+# Prototype Completion with Primitive Knowledge for Few-Shot Learning
+This repository contains the code for the paper:
+
+[**Prototype Completion with Primitive Knowledge for Few-Shot Learning**](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Prototype_Completion_With_Primitive_Knowledge_for_Few-Shot_Learning_CVPR_2021_paper.pdf)
+
+Baoquan Zhang, Xutao Li, Yunming Ye, Zhichao Huang, Lisai Zhang
+
+CVPR 2021
+
+
+
+
+### Abstract
+
+Few-shot learning is a challenging task, which aims to learn a classifier for novel classes with few examples. Pre-training based meta-learning methods effectively tackle the problem by pre-training a feature extractor and then fine-tuning it through the nearest centroid based meta-learning. However, results show that the fine-tuning step makes very marginal improvements. In this paper, 1) we figure out the key reason, i.e., in the pre-trained feature space, the base classes already form compact clusters while novel classes spread as groups with large variances, which implies that fine-tuning the feature extractor is less meaningful; 2) instead of fine-tuning the feature extractor, we focus on estimating more representative prototypes during meta-learning. Consequently, we propose a novel prototype completion based meta-learning framework. This framework first introduces primitive knowledge (i.e., class-level part or attribute annotations) and extracts representative attribute features as priors. Then, we design a prototype completion network to learn to complete prototypes with these priors. To avoid the prototype completion error caused by primitive knowledge noises or class differences, we further develop a Gaussian based prototype fusion strategy that combines the mean-based and completed prototypes by exploiting the unlabeled samples. Extensive experiments demonstrate that our method: (i) obtains more accurate prototypes; (ii) outperforms state-of-the-art techniques by $2\% \sim 9\%$ in terms of classification accuracy.
+
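+The fusion idea above can be illustrated with a minimal sketch of a Gaussian (inverse-variance) fusion of a mean-based prototype and a completed prototype. The function name, tensor shapes, and exact weighting below are illustrative assumptions only, not the formulation implemented in `models/classification_heads.py`:
+
+```python
+import torch
+
+def gaussian_fuse(proto_mean, var_mean, proto_comp, var_comp, eps=1e-8):
+    # Weight each prototype estimate by its inverse variance, then renormalize.
+    w_mean = 1.0 / (var_mean + eps)
+    w_comp = 1.0 / (var_comp + eps)
+    return (w_mean * proto_mean + w_comp * proto_comp) / (w_mean + w_comp)
+
+# Example: a 5-way episode with 512-dimensional features.
+proto_mean = torch.randn(5, 512)  # prototypes from averaging support features
+proto_comp = torch.randn(5, 512)  # prototypes predicted by ProtoComNet
+var_mean = torch.rand(5, 1)       # per-class variances estimated from unlabeled samples
+var_comp = torch.rand(5, 1)
+fused = gaussian_fuse(proto_mean, var_mean, proto_comp, var_comp)
+print(fused.shape)  # torch.Size([5, 512])
+```
+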
+### Citation
+
+If you use this code for your research, please cite our paper:
+```
+@inproceedings{zhang2021prototype,
+ author = {Zhang, Baoquan and Li, Xutao and Ye, Yunming and Huang, Zhichao and Zhang, Lisai},
+ title = {Prototype Completion With Primitive Knowledge for Few-Shot Learning},
+ booktitle = {CVPR},
+ year = {2021},
+ pages = {3754-3762}
+}
+```
+
+## Dependencies
+* Python 3.6
+* [PyTorch 1.1.0](http://pytorch.org)
+
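+A minimal environment sketch (package names inferred from the imports in this repository; versions other than PyTorch 1.1.0 are assumptions):
+```bash
+pip install numpy tqdm h5py Pillow torchvision torchnet
+```
+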
+## Usage
+
+### Installation
+
+1. Clone this repository:
+ ```bash
+ git clone https://github.com/zhangbq-research/Prototype_Completion_for_FSL.git
+ cd Prototype_Completion_for_FSL
+ ```
+2. Download and decompress dataset files: [**miniImageNet**](https://mega.nz/#!rx0wGQyS!96sFlAr6yyv-9QQPCm5OBFbOm4XSD0t-HlmGaT5GaiE) (courtesy of [**Spyros Gidaris**](https://github.com/gidariss/FewShotWithoutForgetting))
+
+3. In the dataset loader, specify the path to the dataset directory. For example, in `Prototype_Completion_for_FSL/data/mini_imagenet.py`, line 30:
+ ```python
+ _MINI_IMAGENET_DATASET_DIR = 'path/to/miniImageNet'
+ ```
+
+### Pre-training
+1. To pre-train a feature extractor on miniImageNet and obtain a good representation for each image:
+ ```bash
+ python main.py --phase pretrain --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+ --head CosineNet --network ResNet --pre_head LinearNet --dataset miniImageNet
+ ```
+
+2. You can experiment with a different classification head by changing the `--pre_head` argument to `LinearRotateNet`.
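+   For example, this is the same pre-training command as above with only the `--pre_head` flag changed:
+   ```bash
+    python main.py --phase pretrain --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+    --head CosineNet --network ResNet --pre_head LinearRotateNet --dataset miniImageNet
+   ```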
+
+### Construct primitive knowledge for all classes
+Download the [**glove_840b_300d**](https://nlp.stanford.edu/data/glove.840B.300d.zip) file and then run
+```bash
+ python ./prior/make_miniimagenet_primitive_knowledge.py
+```
+
+### Extract prior information from primitive knowledge
+```bash
+ python main.py --phase savepart --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+ --network ResNet --dataset miniImageNet
+```
+
+### Learn to complete prototypes
+1. To train ProtoComNet on the 5-way 1-shot miniImageNet benchmark:
+```bash
+ python main.py --phase metainfer --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+ --train-shot 1 --val-shot 1 --train-query 15 --val-query 15 --head FuseCosNet --network ResNet --dataset miniImageNet
+```
+2. To train ProtoComNet on the 5-way 5-shot miniImageNet benchmark:
+```bash
+ python main.py --phase metainfer --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+ --train-shot 5 --val-shot 5 --train-query 15 --val-query 15 --head FuseCosNet --network ResNet --dataset miniImageNet
+```
+
+### Meta-training
+1. To jointly fine-tune the feature extractor and ProtoComNet on the 5-way 1-shot miniImageNet benchmark:
+ ```bash
+ python main.py --phase metatrain --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+ --train-shot 1 --val-shot 1 --train-query 15 --val-query 15 --head FuseCosNet --network ResNet --dataset miniImageNet
+ ```
+2. To jointly fine-tune the feature extractor and ProtoComNet on the 5-way 5-shot miniImageNet benchmark:
+ ```bash
+ python main.py --phase metatrain --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+ --train-shot 5 --val-shot 5 --train-query 15 --val-query 15 --head FuseCosNet --network ResNet --dataset miniImageNet
+ ```
+
+### Meta-testing
+1. To evaluate performance on the 5-way 1-shot miniImageNet benchmark:
+ ```bash
+ python main.py --phase metatest --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+ --train-shot 1 --val-shot 1 --train-query 15 --val-query 15 --head FuseCosNet --network ResNet --dataset miniImageNet
+ ```
+2. To evaluate performance on the 5-way 5-shot miniImageNet benchmark:
+ ```bash
+ python main.py --phase metatest --gpu 0,1,2,3 --save-path "./experiments/meta_part_resnet12_mini" \
+ --train-shot 5 --val-shot 5 --train-query 15 --val-query 15 --head FuseCosNet --network ResNet --dataset miniImageNet
+ ```
+
+## Acknowledgments
+
+This code is based on the implementation of [**MetaOptNet**](https://github.com/kjunelee/MetaOptNet.git).
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/__init__.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e5f08e7229114c9ac4944386498707e847561c9a
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/__init__.py
@@ -0,0 +1,34 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+# Implement your code here.
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/data/__init__.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/data/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..79334ea308876dc3274bdce976691e9595798a37
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/data/__init__.py
@@ -0,0 +1,34 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/data/mini_imagenet.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/data/mini_imagenet.py
new file mode 100644
index 0000000000000000000000000000000000000000..4f8cf4ac0737e4ef742838bcb43e42d2134ba9b7
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/data/mini_imagenet.py
@@ -0,0 +1,591 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+# Dataloader of Gidaris & Komodakis, CVPR 2018
+# Adapted from:
+# https://github.com/gidariss/FewShotWithoutForgetting/blob/master/dataloader.py
+from __future__ import print_function
+
+import os
+import os.path
+import numpy as np
+import random
+import pickle
+import json
+import math
+
+import torch
+import torch.utils.data as data
+import torchvision
+import torchvision.datasets as datasets
+import torchvision.transforms as transforms
+import torchnet as tnt
+
+import h5py
+
+from PIL import Image
+from PIL import ImageEnhance
+
+from pdb import set_trace as breakpoint
+import torch.npu
+import os
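+# Select the Ascend NPU device: default to npu:0 unless the NPU_CALCULATE_DEVICE environment variable gives a different device index.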
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+
+# Set the appropriate paths of the datasets here.
+_MINI_IMAGENET_DATASET_DIR = '../datasets/few_shot_data/MiniImagenet'
+
+def buildLabelIndex(labels):
+ label2inds = {}
+ for idx, label in enumerate(labels):
+ if label not in label2inds:
+ label2inds[label] = []
+ label2inds[label].append(idx)
+
+ return label2inds
+
+
+def load_data(file):
+ try:
+ with open(file, 'rb') as fo:
+ data = pickle.load(fo)
+ return data
+ except:
+ with open(file, 'rb') as f:
+ u = pickle._Unpickler(f)
+ u.encoding = 'latin1'
+ data = u.load()
+ return data
+
+class MiniImageNet(data.Dataset):
+ def __init__(self, phase='train', do_not_use_random_transf=False, use_base=True):
+
+ self.base_folder = 'miniImagenet'
+ assert(phase=='train' or phase=='val' or phase=='test')
+ self.phase = phase
+ self.name = 'MiniImageNet_' + phase
+
+ print('Loading mini ImageNet dataset - phase {0}'.format(phase))
+ file_train_categories_train_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_train_phase_train.pickle')
+ file_train_categories_val_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_train_phase_val.pickle')
+ file_train_categories_test_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_train_phase_test.pickle')
+ file_val_categories_val_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_val.pickle')
+ file_test_categories_test_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_test.pickle')
+
+ if self.phase=='train':
+ # During training phase we only load the training phase images
+ # of the training categories (aka base categories).
+ data_train = load_data(file_train_categories_train_phase)
+ self.data = data_train['data']
+ self.labels = data_train['labels']
+ self.label2catname = {v:k for k, v in data_train['catname2label'].items()}
+
+ self.label2ind = buildLabelIndex(self.labels)
+ self.labelIds = sorted(self.label2ind.keys())
+ self.num_cats = len(self.labelIds)
+ self.labelIds_base = self.labelIds
+ self.num_cats_base = len(self.labelIds_base)
+
+ elif self.phase=='val' or self.phase=='test':
+ if self.phase=='test':
+ # load data that will be used for evaluating the recognition
+ # accuracy of the base categories.
+ data_base = load_data(file_train_categories_test_phase)
+                # load data that will be used for evaluating the few-shot recognition
+ # accuracy on the novel categories.
+ data_novel = load_data(file_test_categories_test_phase)
+ else: # phase=='val'
+ # load data that will be used for evaluating the recognition
+ # accuracy of the base categories.
+ data_base = load_data(file_train_categories_val_phase)
+                # load data that will be used for evaluating the few-shot recognition
+ # accuracy on the novel categories.
+ data_novel = load_data(file_val_categories_val_phase)
+ if use_base:
+ self.data = np.concatenate(
+ [data_base['data'], data_novel['data']], axis=0)
+ self.labels = data_base['labels'] + data_novel['labels']
+ else:
+ self.data = data_novel['data']
+ self.labels = data_novel['labels']
+ self.label2catname = {v: k for k, v in data_novel['catname2label'].items()}
+ self.label2ind = buildLabelIndex(self.labels)
+ self.labelIds = sorted(self.label2ind.keys())
+ self.num_cats = len(self.labelIds)
+
+
+ self.labelIds_base = buildLabelIndex(data_base['labels']).keys()
+ self.labelIds_novel = buildLabelIndex(data_novel['labels']).keys()
+ self.num_cats_base = len(self.labelIds_base)
+ self.num_cats_novel = len(self.labelIds_novel)
+ intersection = set(self.labelIds_base) & set(self.labelIds_novel)
+ assert(len(intersection) == 0)
+ else:
+ raise ValueError('Not valid phase {0}'.format(self.phase))
+
+ mean_pix = [x/255.0 for x in [120.39586422, 115.59361427, 104.54012653]]
+ std_pix = [x/255.0 for x in [70.68188272, 68.27635443, 72.54505529]]
+ normalize = transforms.Normalize(mean=mean_pix, std=std_pix)
+
+ if (self.phase=='test' or self.phase=='val') or (do_not_use_random_transf==True):
+ self.transform = transforms.Compose([
+ lambda x: np.asarray(x),
+ transforms.ToTensor(),
+ normalize
+ ])
+ else:
+ self.transform = transforms.Compose([
+ transforms.RandomCrop(84, padding=8),
+ transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
+ transforms.RandomHorizontalFlip(),
+ lambda x: np.asarray(x),
+ transforms.ToTensor(),
+ normalize
+ ])
+
+ def __getitem__(self, index):
+ img, label = self.data[index], self.labels[index]
+ # doing this so that it is consistent with all other datasets
+ # to return a PIL Image
+ img = Image.fromarray(img)
+ if self.transform is not None:
+ img = self.transform(img)
+ return img, label
+
+ def __len__(self):
+ return len(self.data)
+
+
+class MiniImageNetPC(data.Dataset):
+ def __init__(self, phase='train', shot=1, do_not_use_random_transf=False, use_base=True):
+ self.shot = shot
+ self.base_folder = 'miniImagenet'
+ assert (phase == 'train' or phase == 'val' or phase == 'test')
+ self.phase = phase
+ self.name = 'MiniImageNet_' + phase
+
+ print('Loading mini ImageNet dataset - phase {0}'.format(phase))
+ file_train_categories_train_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_train_phase_train.pickle')
+ file_train_categories_val_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_train_phase_val.pickle')
+ file_train_categories_test_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_train_phase_test.pickle')
+ file_val_categories_val_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_val.pickle')
+ file_test_categories_test_phase = os.path.join(
+ _MINI_IMAGENET_DATASET_DIR,
+ 'miniImageNet_category_split_test.pickle')
+
+ if self.phase == 'train':
+ # During training phase we only load the training phase images
+ # of the training categories (aka base categories).
+ data_train = load_data(file_train_categories_train_phase)
+ self.data = data_train['data']
+ self.labels = data_train['labels']
+
+ self.label2ind = buildLabelIndex(self.labels)
+ self.labelIds = sorted(self.label2ind.keys())
+ self.num_cats = len(self.labelIds)
+ self.labelIds_base = self.labelIds
+ self.num_cats_base = len(self.labelIds_base)
+
+ elif self.phase == 'val' or self.phase == 'test':
+ if self.phase == 'test':
+ # load data that will be used for evaluating the recognition
+ # accuracy of the base categories.
+ data_base = load_data(file_train_categories_test_phase)
+                # load data that will be used for evaluating the few-shot recognition
+ # accuracy on the novel categories.
+ data_novel = load_data(file_test_categories_test_phase)
+ else: # phase=='val'
+ # load data that will be used for evaluating the recognition
+ # accuracy of the base categories.
+ data_base = load_data(file_train_categories_val_phase)
+                # load data that will be used for evaluating the few-shot recognition
+ # accuracy on the novel categories.
+ data_novel = load_data(file_val_categories_val_phase)
+ if use_base:
+ self.data = np.concatenate(
+ [data_base['data'], data_novel['data']], axis=0)
+ self.labels = data_base['labels'] + data_novel['labels']
+ else:
+ self.data = data_novel['data']
+ self.labels = data_novel['labels']
+
+ self.label2ind = buildLabelIndex(self.labels)
+ self.labelIds = sorted(self.label2ind.keys())
+ self.num_cats = len(self.labelIds)
+
+ self.labelIds_base = buildLabelIndex(data_base['labels']).keys()
+ self.labelIds_novel = buildLabelIndex(data_novel['labels']).keys()
+ self.num_cats_base = len(self.labelIds_base)
+ self.num_cats_novel = len(self.labelIds_novel)
+ intersection = set(self.labelIds_base) & set(self.labelIds_novel)
+ assert (len(intersection) == 0)
+ else:
+ raise ValueError('Not valid phase {0}'.format(self.phase))
+
+ mean_pix = [x / 255.0 for x in [120.39586422, 115.59361427, 104.54012653]]
+ std_pix = [x / 255.0 for x in [70.68188272, 68.27635443, 72.54505529]]
+ normalize = transforms.Normalize(mean=mean_pix, std=std_pix)
+
+ if (self.phase == 'test' or self.phase == 'val') or (do_not_use_random_transf == True):
+ self.transform = transforms.Compose([
+ lambda x: np.asarray(x),
+ transforms.ToTensor(),
+ normalize
+ ])
+ else:
+ self.transform = transforms.Compose([
+ transforms.RandomCrop(84, padding=8),
+ transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
+ transforms.RandomHorizontalFlip(),
+ lambda x: np.asarray(x),
+ transforms.ToTensor(),
+ normalize
+ ])
+
+ def __getitem__(self, index):
+ img, label = self.data[index], self.labels[index]
+ imgs = [img, ]
+ # labels = [label, ]
+ if self.shot > 1:
+ sel_index = self.label2ind[label][:]
+ sel_index.remove(index)
+ for i in range(self.shot - 1):
+ imgs.append(self.data[sel_index[i]])
+ # labels.append(label)
+ # doing this so that it is consistent with all other datasets
+ # to return a PIL Image
+ trs_imgs = []
+ for img in imgs:
+ img = Image.fromarray(img)
+ if self.transform is not None:
+ img = self.transform(img)
+ trs_imgs.append(img)
+ trs_imgs = torch.stack(trs_imgs, dim=0)
+ return trs_imgs, label
+
+ def __len__(self):
+ return len(self.data)
+
+class FewShotDataloader():
+ def __init__(self,
+ dataset,
+ nKnovel=5, # number of novel categories.
+ nKbase=-1, # number of base categories.
+ nExemplars=1, # number of training examples per novel category.
+ nTestNovel=15*5, # number of test examples for all the novel categories.
+ nTestBase=15*5, # number of test examples for all the base categories.
+ batch_size=1, # number of training episodes per batch.
+ num_workers=4,
+ epoch_size=2000, # number of batches per epoch.
+ ):
+
+ self.dataset = dataset
+ self.phase = self.dataset.phase
+ max_possible_nKnovel = (self.dataset.num_cats_base if self.phase=='train'
+ else self.dataset.num_cats_novel)
+ assert(nKnovel >= 0 and nKnovel < max_possible_nKnovel)
+ self.nKnovel = nKnovel
+
+ max_possible_nKbase = self.dataset.num_cats_base
+ nKbase = nKbase if nKbase >= 0 else max_possible_nKbase
+ if self.phase=='train' and nKbase > 0:
+ nKbase -= self.nKnovel
+ max_possible_nKbase -= self.nKnovel
+
+ assert(nKbase >= 0 and nKbase <= max_possible_nKbase)
+ self.nKbase = nKbase
+
+ self.nExemplars = nExemplars
+ self.nTestNovel = nTestNovel
+ self.nTestBase = nTestBase
+ self.batch_size = batch_size
+ self.epoch_size = epoch_size
+ self.num_workers = num_workers
+ self.is_eval_mode = (self.phase=='test') or (self.phase=='val')
+
+ def sampleImageIdsFrom(self, cat_id, sample_size=1):
+ """
+ Samples `sample_size` number of unique image ids picked from the
+ category `cat_id` (i.e., self.dataset.label2ind[cat_id]).
+
+ Args:
+ cat_id: a scalar with the id of the category from which images will
+ be sampled.
+ sample_size: number of images that will be sampled.
+
+ Returns:
+ image_ids: a list of length `sample_size` with unique image ids.
+ """
+ assert(cat_id in self.dataset.label2ind)
+ assert(len(self.dataset.label2ind[cat_id]) >= sample_size)
+ # Note: random.sample samples elements without replacement.
+ return random.sample(self.dataset.label2ind[cat_id], sample_size)
+
+ def sampleCategories(self, cat_set, sample_size=1):
+ """
+ Samples `sample_size` number of unique categories picked from the
+ `cat_set` set of categories. `cat_set` can be either 'base' or 'novel'.
+
+ Args:
+ cat_set: string that specifies the set of categories from which
+ categories will be sampled.
+ sample_size: number of categories that will be sampled.
+
+ Returns:
+ cat_ids: a list of length `sample_size` with unique category ids.
+ """
+ if cat_set=='base':
+ labelIds = self.dataset.labelIds_base
+ elif cat_set=='novel':
+ labelIds = self.dataset.labelIds_novel
+ else:
+ raise ValueError('Not recognized category set {}'.format(cat_set))
+
+ assert(len(labelIds) >= sample_size)
+ # return sample_size unique categories chosen from labelIds set of
+ # categories (that can be either self.labelIds_base or self.labelIds_novel)
+ # Note: random.sample samples elements without replacement.
+ return random.sample(labelIds, sample_size)
+
+ def sample_base_and_novel_categories(self, nKbase, nKnovel):
+ """
+ Samples `nKbase` number of base categories and `nKnovel` number of novel
+ categories.
+
+ Args:
+ nKbase: number of base categories
+ nKnovel: number of novel categories
+
+ Returns:
+ Kbase: a list of length 'nKbase' with the ids of the sampled base
+ categories.
+            Knovel: a list of length 'nKnovel' with the ids of the sampled novel
+ categories.
+ """
+ if self.is_eval_mode:
+ assert(nKnovel <= self.dataset.num_cats_novel)
+ # sample from the set of base categories 'nKbase' number of base
+ # categories.
+ Kbase = sorted(self.sampleCategories('base', nKbase))
+ # sample from the set of novel categories 'nKnovel' number of novel
+ # categories.
+ Knovel = sorted(self.sampleCategories('novel', nKnovel))
+ else:
+ # sample from the set of base categories 'nKnovel' + 'nKbase' number
+ # of categories.
+ cats_ids = self.sampleCategories('base', nKnovel+nKbase)
+ assert(len(cats_ids) == (nKnovel+nKbase))
+ # Randomly pick 'nKnovel' number of fake novel categories and keep
+ # the rest as base categories.
+ random.shuffle(cats_ids)
+ Knovel = sorted(cats_ids[:nKnovel])
+ Kbase = sorted(cats_ids[nKnovel:])
+
+ return Kbase, Knovel
+
+ def sample_test_examples_for_base_categories(self, Kbase, nTestBase):
+ """
+ Sample `nTestBase` number of images from the `Kbase` categories.
+
+ Args:
+ Kbase: a list of length `nKbase` with the ids of the categories from
+ where the images will be sampled.
+ nTestBase: the total number of images that will be sampled.
+
+ Returns:
+ Tbase: a list of length `nTestBase` with 2-element tuples. The 1st
+ element of each tuple is the image id that was sampled and the
+                2nd element is its category label (which is in the range
+ [0, len(Kbase)-1]).
+ """
+ Tbase = []
+ if len(Kbase) > 0:
+            # Sample for each base category a number of images such that the total
+            # number of sampled images over all categories equals `nTestBase`.
+ KbaseIndices = np.random.choice(
+ np.arange(len(Kbase)), size=nTestBase, replace=True)
+ KbaseIndices, NumImagesPerCategory = np.unique(
+ KbaseIndices, return_counts=True)
+
+ for Kbase_idx, NumImages in zip(KbaseIndices, NumImagesPerCategory):
+ imd_ids = self.sampleImageIdsFrom(
+ Kbase[Kbase_idx], sample_size=NumImages)
+ Tbase += [(img_id, Kbase_idx) for img_id in imd_ids]
+
+ assert(len(Tbase) == nTestBase)
+
+ return Tbase
+
+ def sample_train_and_test_examples_for_novel_categories(
+ self, Knovel, nTestNovel, nExemplars, nKbase):
+ """Samples train and test examples of the novel categories.
+
+ Args:
+ Knovel: a list with the ids of the novel categories.
+ nTestNovel: the total number of test images that will be sampled
+ from all the novel categories.
+ nExemplars: the number of training examples per novel category that
+ will be sampled.
+ nKbase: the number of base categories. It is used as offset of the
+ category index of each sampled image.
+
+ Returns:
+ Tnovel: a list of length `nTestNovel` with 2-element tuples. The
+ 1st element of each tuple is the image id that was sampled and
+ the 2nd element is its category label (which is in the range
+ [nKbase, nKbase + len(Knovel) - 1]).
+ Exemplars: a list of length len(Knovel) * nExemplars of 2-element
+ tuples. The 1st element of each tuple is the image id that was
+ sampled and the 2nd element is its category label (which is in
+                the range [nKbase, nKbase + len(Knovel) - 1]).
+ """
+
+ if len(Knovel) == 0:
+ return [], []
+
+ nKnovel = len(Knovel)
+ Tnovel = []
+ Exemplars = []
+ assert((nTestNovel % nKnovel) == 0)
+ nEvalExamplesPerClass = int(nTestNovel / nKnovel)
+
+ for Knovel_idx in range(len(Knovel)):
+ imd_ids = self.sampleImageIdsFrom(
+ Knovel[Knovel_idx],
+ sample_size=(nEvalExamplesPerClass + nExemplars))
+
+ imds_tnovel = imd_ids[:nEvalExamplesPerClass]
+            imds_exemplars = imd_ids[nEvalExamplesPerClass:]
+
+ Tnovel += [(img_id, nKbase+Knovel_idx) for img_id in imds_tnovel]
+            Exemplars += [(img_id, nKbase+Knovel_idx) for img_id in imds_exemplars]
+ assert(len(Tnovel) == nTestNovel)
+ assert(len(Exemplars) == len(Knovel) * nExemplars)
+ random.shuffle(Exemplars)
+
+ return Tnovel, Exemplars
+
+ def sample_episode(self):
+ """Samples a training episode."""
+ nKnovel = self.nKnovel
+ nKbase = self.nKbase
+ nTestNovel = self.nTestNovel
+ nTestBase = self.nTestBase
+ nExemplars = self.nExemplars
+
+ Kbase, Knovel = self.sample_base_and_novel_categories(nKbase, nKnovel)
+ Tbase = self.sample_test_examples_for_base_categories(Kbase, nTestBase)
+ Tnovel, Exemplars = self.sample_train_and_test_examples_for_novel_categories(
+ Knovel, nTestNovel, nExemplars, nKbase)
+
+ # concatenate the base and novel category examples.
+ Test = Tbase + Tnovel
+ random.shuffle(Test)
+ Kall = Kbase + Knovel
+
+ return Exemplars, Test, Kall, nKbase
+
+ def createExamplesTensorData(self, examples):
+ """
+ Creates the examples image and label tensor data.
+
+ Args:
+ examples: a list of 2-element tuples, each representing a
+ train or test example. The 1st element of each tuple
+ is the image id of the example and 2nd element is the
+ category label of the example, which is in the range
+ [0, nK - 1], where nK is the total number of categories
+ (both novel and base).
+
+ Returns:
+ images: a tensor of shape [nExamples, Height, Width, 3] with the
+ example images, where nExamples is the number of examples
+ (i.e., nExamples = len(examples)).
+ labels: a tensor of shape [nExamples] with the category label
+ of each example.
+ """
+ images = torch.stack(
+ [self.dataset[img_idx][0] for img_idx, _ in examples], dim=0)
+ labels = torch.LongTensor([label for _, label in examples])
+ return images, labels
+
+ def get_iterator(self, epoch=0):
+ rand_seed = epoch
+ random.seed(rand_seed)
+ np.random.seed(rand_seed)
+ def load_function(iter_idx):
+ Exemplars, Test, Kall, nKbase = self.sample_episode()
+ Xt, Yt = self.createExamplesTensorData(Test)
+ Kall = torch.LongTensor(Kall)
+ if len(Exemplars) > 0:
+ Xe, Ye = self.createExamplesTensorData(Exemplars)
+ return Xe, Ye, Xt, Yt, Kall, nKbase
+ else:
+ return Xt, Yt, Kall, nKbase
+
+ tnt_dataset = tnt.dataset.ListDataset(
+ elem_list=range(self.epoch_size), load=load_function)
+ data_loader = tnt_dataset.parallel(
+ batch_size=self.batch_size,
+ num_workers=(0 if self.is_eval_mode else self.num_workers),
+ shuffle=(False if self.is_eval_mode else True))
+
+ return data_loader
+
+ def __call__(self, epoch=0):
+ return self.get_iterator(epoch)
+
+ def __len__(self):
+        return self.epoch_size // self.batch_size
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/main.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/main.py
new file mode 100644
index 0000000000000000000000000000000000000000..0f043929ad8545f48ae60c00207c261415b31520
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/main.py
@@ -0,0 +1,822 @@
+# -*- coding: utf-8 -*-
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import os
+import argparse
+import random
+import numpy as np
+from tqdm import tqdm
+import torch
+import torch.nn.functional as F
+from torch.utils.data import DataLoader
+from torch.autograd import Variable
+
+from models.resnet12_2 import resnet12
+from models.meta_part_inference_mini import ProtoComNet
+from models.PredTrainHead import LinearClassifier, LinearRotateHead
+
+from utils import set_gpu, Timer, count_accuracy, check_dir, log
+import pickle
+import torch.npu
+import os
+import time
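+# Select the Ascend NPU device: default to npu:0 unless the NPU_CALCULATE_DEVICE environment variable gives a different device index.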
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+def one_hot(indices, depth):
+ """
+ Returns a one-hot tensor.
+ This is a PyTorch equivalent of Tensorflow's tf.one_hot.
+
+ Parameters:
+ indices: a (n_batch, m) Tensor or (m) Tensor.
+ depth: a scalar. Represents the depth of the one hot dimension.
+ Returns: a (n_batch, m, depth) Tensor or (m, depth) Tensor.
+ """
+
+    encoded_indices = torch.zeros(indices.size() + torch.Size([depth])).npu()
+    index = indices.view(indices.size() + torch.Size([1]))
+    encoded_indices = encoded_indices.scatter_(1, index, 1)
+
+    return encoded_indices
+
+def get_model(options):
+ # Choose the embedding network
+ if options.network == 'ResNet':
+ network = resnet12().npu()
+ network = torch.nn.DataParallel(network)
+ fea_dim = 512
+ else:
+ print ("Cannot recognize the network type")
+ assert(False)
+
+ propa_head = ProtoComNet(opt=options, in_dim=fea_dim).npu()
+ # Choose the classification head
+    if options.use_trainval == 'True':
+ n_classes=80
+ else:
+ n_classes=64
+ if options.pre_head == 'LinearNet':
+ pre_head = LinearClassifier(in_dim=fea_dim, n_classes=n_classes).npu()
+ elif options.pre_head == 'LinearRotateNet':
+ pre_head = LinearRotateHead(in_dim=fea_dim, n_classes=n_classes).npu()
+ else:
+        print("Cannot recognize the pre_head type")
+ assert (False)
+
+ if options.phase == 'pretrain':
+ from models.classification_heads_orgin import ClassificationHead
+ else:
+ from models.classification_heads import ClassificationHead
+ # Choose the classification head
+ if options.head == 'CosineNet':
+ cls_head = ClassificationHead(base_learner='CosineNet').npu()
+ elif options.head == 'FuseCosNet':
+ cls_head = ClassificationHead(base_learner='FuseCos').npu()
+ else:
+        print ("Cannot recognize the head type")
+ assert(False)
+
+ return (network, propa_head, pre_head, cls_head)
+
+def get_dataset(options):
+ # Choose the embedding network
+ if options.dataset == 'miniImageNet':
+ from data.mini_imagenet import MiniImageNet, FewShotDataloader, MiniImageNetPC
+ # dataset_trainval = MiniImageNet(phase='trainval')
+ if options.phase == 'savepart':
+ dataset_train = MiniImageNet(phase='train', do_not_use_random_transf=True)
+ elif options.phase == 'metainfer':
+ dataset_train = MiniImageNetPC(phase='train', shot=options.train_shot)
+ else:
+ dataset_train = MiniImageNet(phase='train')
+ dataset_val = MiniImageNet(phase='val')
+ dataset_test = MiniImageNet(phase='test')
+ data_loader = FewShotDataloader
+ else:
+ print ("Cannot recognize the dataset type")
+ assert(False)
+
+ return (dataset_train, dataset_val, dataset_test, data_loader)
+
+def seed_torch(seed=21):
+ os.environ['PYTHONHASHSEED'] = str(seed)
+ random.seed(seed)
+ np.random.seed(seed)
+ torch.manual_seed(seed)
+ torch.npu.manual_seed(seed)
+ torch.npu.manual_seed_all(seed)
+ torch.backends.cudnn.deterministic = True
+ torch.backends.cudnn.benchmark = False
+
+
+def pre_train(opt, dataset_train, dataset_val, dataset_test, data_loader):
+ data_loader_pre = torch.utils.data.DataLoader
+ # Dataloader of Gidaris & Komodakis (CVPR 2018)
+
+ if opt.use_trainval == 'True':
+ train_way = 80
+ dloader_train = data_loader_pre(
+ dataset=dataset_trainval,
+ batch_size=128,
+ shuffle=True,
+ num_workers=4
+ )
+ else:
+ train_way = 64
+ dloader_train = data_loader_pre(
+ dataset=dataset_train,
+ batch_size=128,
+ shuffle=True,
+ num_workers=4
+ )
+ dloader_val = data_loader(
+ dataset=dataset_val,
+ nKnovel=opt.test_way,
+ nKbase=0,
+ nExemplars=opt.val_shot, # num training examples per novel category
+ nTestNovel=opt.val_query * opt.test_way, # num test examples for all the novel categories
+ nTestBase=0, # num test examples for all the base categories
+ batch_size=1,
+ num_workers=0,
+ epoch_size=1 * opt.val_episode, # num of batches per epoch
+ )
+
+ set_gpu(opt.gpu)
+ check_dir('./experiments/')
+ check_dir(opt.save_path)
+
+ log_file_path = os.path.join(opt.save_path, "train_log.txt")
+ log(log_file_path, str(vars(opt)))
+
+ (embedding_net, propa_head, pre_head, cls_head) = get_model(opt)
+
+ print(list(dict(propa_head.named_parameters()).keys()))
+ optimizer = torch.optim.SGD([{'params': embedding_net.parameters()},
+ {'params': pre_head.parameters()}], lr=0.1, momentum=0.9, \
+ weight_decay=5e-4, nesterov=True)
+
+ lambda_epoch = lambda e: 1.0 if e < 60 else (0.1 if e < 80 else 0.01 if e < 90 else (0.001))
+ lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_epoch, last_epoch=-1)
+
+ max_val_acc = 0.0
+ max_test_acc = 0.0
+
+ timer = Timer()
+ x_entropy = torch.nn.CrossEntropyLoss()
+
+ for epoch in range(1, opt.num_epoch + 1):
+ # Train on the training split
+ lr_scheduler.step()
+
+ # Fetch the current epoch's learning rate
+ epoch_learning_rate = 0.1
+ for param_group in optimizer.param_groups:
+ epoch_learning_rate = param_group['lr']
+
+ log(log_file_path, 'Train Epoch: {}\tLearning Rate: {:.4f}'.format(
+ epoch, epoch_learning_rate))
+
+ _, _, _, _ = [x.train() for x in (embedding_net, propa_head, pre_head, cls_head)]
+
+ train_accuracies = []
+ train_losses = []
+
+ for i, batch in enumerate(tqdm(dloader_train), 1):
+ start_time = time.time()
+ data, labels = [x.npu() for x in batch]
+
+ if opt.pre_head == 'LinearNet' or opt.pre_head == 'CosineNet':
+ emb = embedding_net(data)
+ logit = pre_head(emb)
+ smoothed_one_hot = one_hot(labels.reshape(-1), train_way)
+ smoothed_one_hot = smoothed_one_hot * (1 - opt.eps) + (1 - smoothed_one_hot) * opt.eps / (train_way - 1)
+
+ log_prb = F.log_softmax(logit.reshape(-1, train_way), dim=1)
+ loss = -(smoothed_one_hot * log_prb).sum(dim=1)
+ loss = loss.mean()
+ acc = count_accuracy(logit.reshape(-1, train_way), labels.reshape(-1))
+ elif opt.pre_head == 'LinearRotateNet' or opt.pre_head == 'DistRotateNet':
+ x_ = []
+ y_ = []
+ a_ = []
+ for j in range(data.shape[0]):
+ x90 = data[j].transpose(2, 1).flip(1)
+ x180 = x90.transpose(2, 1).flip(1)
+ x270 = x180.transpose(2, 1).flip(1)
+ x_ += [data[j], x90, x180, x270]
+ y_ += [labels[j] for _ in range(4)]
+ a_ += [torch.tensor(0), torch.tensor(1), torch.tensor(2), torch.tensor(3)]
+
+ x_ = Variable(torch.stack(x_, 0)).npu()
+ y_ = Variable(torch.stack(y_, 0)).npu()
+ a_ = Variable(torch.stack(a_, 0)).npu()
+ emb = embedding_net(x_)
+ # print(emb.shape)
+ logit = pre_head(emb, use_cls=True)
+ logit_rotate = pre_head(emb, use_cls=False)
+ smoothed_one_hot = one_hot(y_.reshape(-1), train_way)
+ smoothed_one_hot = smoothed_one_hot * (1 - opt.eps) + (1 - smoothed_one_hot) * opt.eps / (train_way - 1)
+
+ log_prb = F.log_softmax(logit.reshape(-1, train_way), dim=1)
+ loss = -(smoothed_one_hot * log_prb).sum(dim=1)
+ loss = loss.mean()
+ rloss = F.cross_entropy(input=logit_rotate, target=a_)
+ loss = 0.5 * loss + 0.5 * rloss
+ acc = count_accuracy(logit.reshape(-1, train_way), y_.reshape(-1))
+ else:
+ print("Cannot recognize the pre_head type")
+ assert (False)
+
+
+ train_accuracies.append(acc.item())
+ train_losses.append(loss.item())
+ step_time = time.time() - start_time
+ if (i % 10 == 0):
+ train_acc_avg = np.mean(np.array(train_accuracies))
+ log(log_file_path, 'Train Epoch: {}\tBatch: [{}]\tLoss: {:.3f}\tAccuracy: {:.3f} % ({:.3f} %) \ttime/step(s):{:.4f}'.format(
+ epoch, i, loss.item(), train_acc_avg, acc,step_time))
+
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+
+ # Evaluate on the validation split
+ _, _, _, _ = [x.eval() for x in (embedding_net, propa_head, pre_head, cls_head)]
+
+ val_accuracies = []
+ val_losses = []
+
+ for i, batch in enumerate(tqdm(dloader_val(opt.seed)), 1):
+ data_support, labels_support, \
+ data_query, labels_query, _, _ = [
+ x.npu() for x in batch]
+
+ test_n_support = opt.test_way * opt.val_shot
+ test_n_query = opt.test_way * opt.val_query
+
+ emb_support = embedding_net(data_support.reshape([-1] + list(data_support.shape[-3:])))
+ emb_support = emb_support.reshape(1, test_n_support, -1)
+
+ emb_query = embedding_net(data_query.reshape([-1] + list(data_query.shape[-3:])))
+ emb_query = emb_query.reshape(1, test_n_query, -1)
+
+ logit_query = cls_head(emb_query, emb_support, labels_support, opt.test_way, opt.val_shot)
+
+ loss = x_entropy(logit_query.reshape(-1, opt.test_way), labels_query.reshape(-1))
+ acc = count_accuracy(logit_query.reshape(-1, opt.test_way), labels_query.reshape(-1))
+
+ val_accuracies.append(acc.item())
+ val_losses.append(loss.item())
+
+ val_acc_avg = np.mean(np.array(val_accuracies))
+ val_acc_ci95 = 1.96 * np.std(np.array(val_accuracies)) / np.sqrt(opt.val_episode)
+
+ val_loss_avg = np.mean(np.array(val_losses))
+
+ if val_acc_avg > max_val_acc:
+ max_val_acc = val_acc_avg
+ torch.save({'embedding': embedding_net.state_dict(), 'propa_head': propa_head.state_dict(),
+ 'pre_head': pre_head.state_dict(), 'cls_head': cls_head.state_dict()}, \
+ os.path.join(opt.save_path, 'best_pretrain_model.pth'))
+ log(log_file_path, 'Validation Epoch: {}\t\t\tLoss: {:.4f}\tAccuracy: {:.2f} +- {:.2f} % (Best)' \
+ .format(epoch, val_loss_avg, val_acc_avg, val_acc_ci95))
+ else:
+ log(log_file_path, 'Validation Epoch: {}\t\t\tLoss: {:.4f}\tAccuracy: {:.2f} +- {:.2f} %' \
+ .format(epoch, val_loss_avg, val_acc_avg, val_acc_ci95))
+
+ torch.save({'embedding': embedding_net.state_dict(), 'propa_head': propa_head.state_dict(),
+ 'pre_head': pre_head.state_dict(), 'cls_head': cls_head.state_dict()} \
+ , os.path.join(opt.save_path, 'last_pretrain_epoch.pth'))
+
+ if epoch % opt.save_epoch == 0:
+ torch.save({'embedding': embedding_net.state_dict(), 'propa_head': propa_head.state_dict(),
+ 'pre_head': pre_head.state_dict(), 'cls_head': cls_head.state_dict()} \
+ , os.path.join(opt.save_path, 'epoch_{}_pretrain.pth'.format(epoch)))
+
+def part_prototype(opt, dataset_train, dataset_val, dataset_test, data_loader):
+ data_loader_pre = torch.utils.data.DataLoader
+ # Dataloader of Gidaris & Komodakis (CVPR 2018)
+ dloader_train = data_loader_pre(
+ dataset=dataset_train,
+ batch_size=1,
+ shuffle=False,
+ num_workers=0
+ )
+
+ set_gpu(opt.gpu)
+ check_dir('./experiments/')
+ check_dir(opt.save_path)
+
+ log_file_path = os.path.join(opt.save_path, "train_log.txt")
+ log(log_file_path, str(vars(opt)))
+
+ (embedding_net, propa_head, pre_head, cls_head) = get_model(opt)
+
+ # Load saved model checkpoints
+ saved_models = torch.load(os.path.join(opt.save_path, 'best_pretrain_model.pth'))
+ embedding_net.load_state_dict(saved_models['embedding'])
+ embedding_net.eval()
+
+ embs = []
+ for i, batch in enumerate(tqdm(dloader_train), 1):
+ data, labels = [x.npu() for x in batch]
+ with torch.no_grad():
+ emb = embedding_net(data)
+ embs.append(emb)
+ embs = torch.cat(embs, dim=0)
+
+ with open('./data/mini_imagenet_part_prior_train.pickle', 'rb') as handle:
+ part_prior = pickle.load(handle)
+ train_class_name_file = './data/mini_imagenet_catname2label_train.pickle'
+ with open(train_class_name_file, 'rb') as handle:
+ catname2label_train = pickle.load(handle)
+
+ a = 1
+ attr_feature = {}
+ for attr_id in part_prior['attribute_id_class_dict'].keys():
+ if attr_id not in [part_prior['wnids2id'][wnid] for wnid in part_prior['all_wnids']]:
+ attr_im_id = []
+ for sel_class_id in list(set(part_prior['attribute_id_class_dict'][attr_id])):
+ if sel_class_id in [part_prior['wnids2id'][wnid] for wnid in part_prior['wnids_train']]:
+ sel_class = catname2label_train[part_prior['id2wnids'][sel_class_id]]
+ attr_im_id.extend(dataset_train.label2ind[sel_class])
+ attr_im = embs[attr_im_id, :]
+ mean = torch.mean(attr_im, dim=0).unsqueeze(dim=0)
+ std = torch.std(attr_im, dim=0).unsqueeze(dim=0)
+ attr_feature[attr_id] = {'mean': mean, 'std':std}
+
+ with open(os.path.join(opt.save_path, "mini_imagenet_metapart_feature.pickle"), 'wb') as handle:
+ pickle.dump(attr_feature, handle, protocol=pickle.HIGHEST_PROTOCOL)
+
+ class_feature = {}
+ for class_id in part_prior['class_attribute_id_dict'].keys():
+ if class_id in [part_prior['wnids2id'][wnid] for wnid in part_prior['wnids_train']]:
+ sel_class = catname2label_train[part_prior['id2wnids'][class_id]]
+ class_im = embs[dataset_train.label2ind[sel_class], :]
+ mean = torch.mean(class_im, dim=0).unsqueeze(dim=0)
+ std = torch.std(class_im, dim=0).unsqueeze(dim=0)
+ class_feature[sel_class] = {'mean': mean, 'std':std}
+
+ with open(os.path.join(opt.save_path, "mini_imagenet_class_feature.pickle"), 'wb') as handle:
+ pickle.dump(class_feature, handle, protocol=pickle.HIGHEST_PROTOCOL)
+
+def meta_inference(opt, dataset_train, dataset_val, dataset_test, data_loader):
+ data_loader_pre = torch.utils.data.DataLoader
+ # Dataloader of Gidaris & Komodakis (CVPR 2018)
+ dloader_train = data_loader_pre(
+ dataset=dataset_train,
+ batch_size=128,
+ shuffle=True,
+ num_workers=0
+ )
+
+ dloader_val = data_loader(
+ dataset=dataset_val,
+ nKnovel=opt.test_way,
+ nKbase=0,
+ nExemplars=opt.val_shot, # num training examples per novel category
+ nTestNovel=opt.val_query * opt.test_way, # num test examples for all the novel categories
+ nTestBase=0, # num test examples for all the base categories
+ batch_size=1,
+ num_workers=0,
+ epoch_size=1 * opt.val_episode, # num of batches per epoch
+ )
+
+ set_gpu(opt.gpu)
+ check_dir('./experiments/')
+ check_dir(opt.save_path)
+
+ log_file_path = os.path.join(opt.save_path, "train_log.txt")
+ log(log_file_path, str(vars(opt)))
+
+ (embedding_net, propa_head, pre_head, cls_head) = get_model(opt)
+
+ # Load saved model checkpoints
+ saved_models = torch.load(os.path.join(opt.save_path, 'best_pretrain_model.pth'))
+ embedding_net.load_state_dict(saved_models['embedding'])
+ embedding_net.eval()
+ cls_head.eval()
+
+ optimizer = torch.optim.SGD([{'params': propa_head.parameters()}], lr=0.1, momentum=0.9, \
+ weight_decay=5e-4, nesterov=True)
+
+ lambda_epoch = lambda e: 1.0 if e < 15 else (0.1 if e < 40 else 0.01 if e < 80 else (0.001))
+ lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_epoch, last_epoch=-1)
+
+ train_losses = []
+ x_entropy = torch.nn.CrossEntropyLoss()
+ max_loss = 10e16
+ max_val_acc = 0
+ max_test_acc = 0
+ for epoch in range(0, opt.num_epoch + 1):
+ # Train on the training split
+ lr_scheduler.step()
+
+ # Fetch the current epoch's learning rate
+ epoch_learning_rate = 0.1
+ for param_group in optimizer.param_groups:
+ epoch_learning_rate = param_group['lr']
+
+ log(log_file_path, 'Train Epoch: {}\tLearning Rate: {:.4f}'.format(
+ epoch, epoch_learning_rate))
+
+ propa_head.train()
+ train_accuracies = []
+ for i, batch in enumerate(tqdm(dloader_train), 1):
+ data, labels = [x.npu() for x in batch]
+ nb, ns, nc, nw, nh = data.shape
+ with torch.no_grad():
+ data = data.reshape(nb*ns, nc, nw, nh)
+ emb = embedding_net(data)
+ emb = emb.reshape(nb, ns, -1)
+ emb = emb.mean(dim=1)
+ proto, proto_true = propa_head(emb, labels)
+ loss = F.mse_loss(proto, proto_true)
+
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+ train_losses.append(loss.item())
+ if (i % 10 == 0):
+ train_loss_avg = np.mean(np.array(train_losses))
+ log(log_file_path, 'Train Epoch: {}\tBatch: [{}]\tLoss: {}({})'.format(
+ epoch, i, loss.item(), train_loss_avg))
+
+ # Evaluate on the validation split
+ _, _, _, _ = [x.eval() for x in (embedding_net, propa_head, pre_head, cls_head)]
+
+ val_accuracies = []
+ val_losses = []
+
+ for i, batch in enumerate(tqdm(dloader_val(opt.seed)), 1):
+ data_support, labels_support, \
+ data_query, labels_query, k_all, _ = [
+ x.npu() for x in batch]
+
+ test_n_support = opt.test_way * opt.val_shot
+ test_n_query = opt.test_way * opt.val_query
+
+ with torch.no_grad():
+ emb_support = embedding_net(data_support.reshape([-1] + list(data_support.shape[-3:])))
+ emb_support = emb_support.reshape(1, test_n_support, -1)
+
+ emb_query = embedding_net(data_query.reshape([-1] + list(data_query.shape[-3:])))
+ emb_query = emb_query.reshape(1, test_n_query, -1)
+
+ logit_query = cls_head(k_all, propa_head, emb_query, emb_support, labels_support, opt.test_way, opt.val_shot, is_scale=True)
+
+ loss = x_entropy(logit_query.reshape(-1, opt.test_way), labels_query.reshape(-1))
+ acc = count_accuracy(logit_query.reshape(-1, opt.test_way), labels_query.reshape(-1))
+
+ val_accuracies.append(acc.item())
+ val_losses.append(loss.item())
+
+ val_acc_avg = np.mean(np.array(val_accuracies))
+ val_acc_ci95 = 1.96 * np.std(np.array(val_accuracies)) / np.sqrt(opt.val_episode)
+
+ val_loss_avg = np.mean(np.array(val_losses))
+
+ if val_acc_avg > max_val_acc:
+ max_val_acc = val_acc_avg
+ torch.save({'embedding': embedding_net.state_dict(), 'propa_head': propa_head.state_dict(),
+ 'pre_head': pre_head.state_dict(), 'cls_head': cls_head.state_dict()}, \
+ os.path.join(opt.save_path, 'best_pretrain_model_meta_infer_val_{}w_{}s_{}.pth'.format(opt.test_way, opt.val_shot, opt.head)))
+            log(log_file_path, 'Validation Epoch: {}\t\t\tLoss: {:.4f}\tAccuracy: {:.2f} ± {:.2f} % (Best)' \
+ .format(epoch, val_loss_avg, val_acc_avg, val_acc_ci95))
+ else:
+            log(log_file_path, 'Validation Epoch: {}\t\t\tLoss: {:.4f}\tAccuracy: {:.2f} ± {:.2f} %' \
+ .format(epoch, val_loss_avg, val_acc_avg, val_acc_ci95))
+
+ torch.save({'embedding': embedding_net.state_dict(), 'propa_head': propa_head.state_dict(),
+ 'pre_head': pre_head.state_dict(), 'cls_head': cls_head.state_dict()} \
+ , os.path.join(opt.save_path, 'last_pretrain_epoch_meta_infer.pth'))
+
+ if epoch % opt.save_epoch == 0:
+ torch.save({'embedding': embedding_net.state_dict(), 'propa_head': propa_head.state_dict(),
+ 'pre_head': pre_head.state_dict(), 'cls_head': cls_head.state_dict()} \
+ , os.path.join(opt.save_path, 'epoch_{}_pretrain_meta_infer.pth'.format(epoch)))
+
+def meta_train(opt, dataset_train, dataset_val, dataset_test, data_loader):
+ # Dataloader of Gidaris & Komodakis (CVPR 2018)
+ dloader_train = data_loader(
+ dataset=dataset_train,
+ nKnovel=opt.train_way,
+ nKbase=0,
+ nExemplars=opt.train_shot, # num training examples per novel category
+ nTestNovel=opt.train_way * opt.train_query, # num test examples for all the novel categories
+ nTestBase=0, # num test examples for all the base categories
+ batch_size=opt.episodes_per_batch,
+ num_workers=4,
+ epoch_size=opt.episodes_per_batch * 100, # num of batches per epoch
+ )
+
+ dloader_val = data_loader(
+ dataset=dataset_val,
+ nKnovel=opt.test_way,
+ nKbase=0,
+ nExemplars=opt.val_shot, # num training examples per novel category
+ nTestNovel=opt.val_query * opt.test_way, # num test examples for all the novel categories
+ nTestBase=0, # num test examples for all the base categories
+ batch_size=1,
+ num_workers=0,
+ epoch_size=1 * opt.val_episode, # num of batches per epoch
+ )
+
+ set_gpu(opt.gpu)
+ check_dir('./experiments/')
+ check_dir(opt.save_path)
+
+ log_file_path = os.path.join(opt.save_path, "train_log.txt")
+ log(log_file_path, str(vars(opt)))
+
+ (embedding_net, propa_head, pre_head, cls_head) = get_model(opt)
+
+ # Load saved model checkpoints
+ saved_models = torch.load(os.path.join(opt.save_path, 'best_pretrain_model_meta_infer_val_{}w_{}s_{}.pth'.format(opt.test_way, opt.val_shot, opt.head)))
+ embedding_net.load_state_dict(saved_models['embedding'])
+ embedding_net.eval()
+ propa_head.load_state_dict(saved_models['propa_head'])
+ propa_head.eval()
+
+ optimizer = torch.optim.SGD([{'params': embedding_net.parameters()},
+ {'params': propa_head.parameters()},
+ {'params': cls_head.parameters()}], lr=0.0001, momentum=0.9, \
+ weight_decay=5e-4, nesterov=True)
+
+ lambda_epoch = lambda e: 1.0 if e < 15 else (0.1 if e < 25 else 0.01 if e < 30 else (0.001))
+ lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_epoch, last_epoch=-1)
+
+ max_val_acc = 0.0
+ max_test_acc = 0.0
+
+ timer = Timer()
+ x_entropy = torch.nn.CrossEntropyLoss()
+
+ for epoch in range(0, opt.num_epoch + 1):
+ if epoch != 0:
+ # Train on the training split
+ lr_scheduler.step()
+
+ # Fetch the current epoch's learning rate
+ epoch_learning_rate = 0.1
+ for param_group in optimizer.param_groups:
+ epoch_learning_rate = param_group['lr']
+
+ log(log_file_path, 'Train Epoch: {}\tLearning Rate: {:.4f}'.format(
+ epoch, epoch_learning_rate))
+
+ _, _, _ = [x.train() for x in (embedding_net, propa_head, cls_head)]
+
+ train_accuracies = []
+ train_losses = []
+
+ for i, batch in enumerate(tqdm(dloader_train(epoch)), 1):
+ data_support, labels_support, \
+ data_query, labels_query, k_all, _ = [
+ x.npu() for x in batch]
+
+ train_n_support = opt.train_way * opt.train_shot
+ train_n_query = opt.train_way * opt.train_query
+
+ emb_support = embedding_net(data_support.reshape([-1] + list(data_support.shape[-3:])))
+ emb_support = emb_support.reshape(opt.episodes_per_batch, train_n_support, -1)
+
+ emb_query = embedding_net(data_query.reshape([-1] + list(data_query.shape[-3:])))
+ emb_query = emb_query.reshape(opt.episodes_per_batch, train_n_query, -1)
+
+ logit_query = cls_head(k_all, propa_head, emb_query, emb_support, labels_support, opt.train_way, opt.train_shot, is_scale=False)
+
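+ # Label-smoothed cross-entropy: spread opt.eps of the probability mass uniformly over the non-target
+ # classes (reduces to standard cross-entropy when eps = 0).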
+ smoothed_one_hot = one_hot(labels_query.reshape(-1), opt.train_way)
+ smoothed_one_hot = smoothed_one_hot * (1 - opt.eps) + (1 - smoothed_one_hot) * opt.eps / (opt.train_way - 1)
+
+ log_prb = F.log_softmax(logit_query.reshape(-1, opt.train_way), dim=1)
+ loss = -(smoothed_one_hot * log_prb).sum(dim=1)
+ loss = loss.mean()
+
+ acc = count_accuracy(logit_query.reshape(-1, opt.train_way), labels_query.reshape(-1))
+
+ train_accuracies.append(acc.item())
+ train_losses.append(loss.item())
+
+ if (i % 10 == 0):
+ train_acc_avg = np.mean(np.array(train_accuracies))
+ log(log_file_path, 'Train Epoch: {}\tBatch: [{}]\tLoss: {}\tAccuracy: {} % ({} %)'.format(
+ epoch, i, loss.item(), train_acc_avg, acc))
+
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+
+ # Evaluate on the validation split
+ _, _, _ = [x.eval() for x in (embedding_net, propa_head, cls_head)]
+
+ val_accuracies = []
+ val_losses = []
+
+ for i, batch in enumerate(tqdm(dloader_val(opt.seed)), 1):
+ data_support, labels_support, \
+ data_query, labels_query, k_all, _ = [
+ x.npu() for x in batch]
+
+ test_n_support = opt.test_way * opt.val_shot
+ test_n_query = opt.test_way * opt.val_query
+
+ with torch.no_grad():
+ emb_support = embedding_net(data_support.reshape([-1] + list(data_support.shape[-3:])))
+ emb_support = emb_support.reshape(1, test_n_support, -1)
+
+ emb_query = embedding_net(data_query.reshape([-1] + list(data_query.shape[-3:])))
+ emb_query = emb_query.reshape(1, test_n_query, -1)
+
+ logit_query = cls_head(k_all, propa_head, emb_query, emb_support, labels_support, opt.test_way, opt.val_shot, is_scale=True)
+
+ loss = x_entropy(logit_query.reshape(-1, opt.test_way), labels_query.reshape(-1))
+ acc = count_accuracy(logit_query.reshape(-1, opt.test_way), labels_query.reshape(-1))
+
+ val_accuracies.append(acc.item())
+ val_losses.append(loss.item())
+
+ val_acc_avg = np.mean(np.array(val_accuracies))
+ val_acc_ci95 = 1.96 * np.std(np.array(val_accuracies)) / np.sqrt(opt.val_episode)
+
+ val_loss_avg = np.mean(np.array(val_losses))
+
+ if val_acc_avg > max_val_acc:
+ max_val_acc = val_acc_avg
+ torch.save({'embedding': embedding_net.state_dict(), 'propa_head': propa_head.state_dict(),
+ 'pre_head': pre_head.state_dict(), 'cls_head': cls_head.state_dict()}, \
+ os.path.join(opt.save_path, 'best_model_meta_val_{}w_{}s_{}.pth'.format(opt.test_way, opt.val_shot, opt.head)))
+ log(log_file_path, 'Validation Epoch: {}\t\t\tLoss: {:.4f}\tAccuracy: {:.2f} ± {:.2f} % (Best)' \
+ .format(epoch, val_loss_avg, val_acc_avg, val_acc_ci95))
+ else:
+ log(log_file_path, 'Validation Epoch: {}\t\t\tLoss: {:.4f}\tAccuracy: {:.2f} ± {:.2f} %' \
+ .format(epoch, val_loss_avg, val_acc_avg, val_acc_ci95))
+
+
+def meta_test(opt, dataset_train, dataset_val, dataset_test, data_loader):
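+ # Meta-testing phase: load the best meta-trained checkpoint and evaluate it on episodes sampled from the
+ # test split; accuracy is reported with a 95% confidence interval.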
+ # Dataloader of Gidaris & Komodakis (CVPR 2018)
+ dloader_test = data_loader(
+ dataset=dataset_test,
+ nKnovel=opt.test_way,
+ nKbase=0,
+ nExemplars=opt.val_shot, # num training examples per novel category
+ nTestNovel=opt.val_query * opt.test_way, # num test examples for all the novel categories
+ nTestBase=0, # num test examples for all the base categories
+ batch_size=1,
+ num_workers=0,
+ epoch_size=1 * opt.val_episode, # num of batches per epoch
+ )
+
+ set_gpu(opt.gpu)
+ check_dir('./experiments/')
+ check_dir(opt.save_path)
+
+ log_file_path = os.path.join(opt.save_path, "train_log.txt")
+ log(log_file_path, str(vars(opt)))
+
+ (embedding_net, propa_head, pre_head, cls_head) = get_model(opt)
+
+ # Load saved model checkpoints
+ saved_models = torch.load(os.path.join(opt.save_path, 'best_model_meta_val_{}w_{}s_{}.pth'.format(opt.test_way, opt.val_shot, opt.head)))
+ embedding_net.load_state_dict(saved_models['embedding'])
+ embedding_net.eval()
+ propa_head.load_state_dict(saved_models['propa_head'])
+ propa_head.eval()
+
+ max_val_acc = 0.0
+ max_test_acc = 0.0
+
+ timer = Timer()
+ x_entropy = torch.nn.CrossEntropyLoss()
+
+ # Evaluate on the test split
+ _, _, _ = [x.eval() for x in (embedding_net, propa_head, cls_head)]
+ test_accuracies = []
+ test_losses = []
+
+ for i, batch in enumerate(tqdm(dloader_test(opt.seed)), 1):
+ data_support, labels_support, \
+ data_query, labels_query, k_all, _ = [
+ x.npu() for x in batch]
+
+ test_n_support = opt.test_way * opt.val_shot
+ test_n_query = opt.test_way * opt.val_query
+
+ with torch.no_grad():
+ emb_support = embedding_net(data_support.reshape([-1] + list(data_support.shape[-3:])))
+ emb_support = emb_support.reshape(1, test_n_support, -1)
+
+ emb_query = embedding_net(data_query.reshape([-1] + list(data_query.shape[-3:])))
+ emb_query = emb_query.reshape(1, test_n_query, -1)
+
+ logit_query = cls_head(k_all, propa_head, emb_query, emb_support, labels_support, opt.test_way, opt.val_shot, is_scale=True)
+
+ loss = x_entropy(logit_query.reshape(-1, opt.test_way), labels_query.reshape(-1))
+ acc = count_accuracy(logit_query.reshape(-1, opt.test_way), labels_query.reshape(-1))
+
+ test_accuracies.append(acc.item())
+ test_losses.append(loss.item())
+
+ test_acc_avg = np.mean(np.array(test_accuracies))
+ test_acc_ci95 = 1.96 * np.std(np.array(test_accuracies)) / np.sqrt(opt.val_episode)
+
+ test_loss_avg = np.mean(np.array(test_losses))
+
+ log(log_file_path, 'Test Loss: {:.4f}\tAccuracy: {:.2f} ± {:.2f} % (Best)' \
+ .format(test_loss_avg, test_acc_avg, test_acc_ci95))
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--num-epoch', type=int, default=100,
+ help='number of training epochs')
+ parser.add_argument('--save-epoch', type=int, default=10,
+ help='frequency of model saving')
+ parser.add_argument('--train-shot', type=int, default=1,
+ help='number of support examples per training class')
+ parser.add_argument('--val-shot', type=int, default=1,
+ help='number of support examples per validation class')
+ parser.add_argument('--train-query', type=int, default=15,
+ help='number of query examples per training class')
+ parser.add_argument('--val-episode', type=int, default=600,
+ help='number of episodes per validation')
+ parser.add_argument('--val-query', type=int, default=15,
+ help='number of query examples per validation class')
+ parser.add_argument('--train-way', type=int, default=5,
+ help='number of classes in one training episode')
+ parser.add_argument('--test-way', type=int, default=5,
+ help='number of classes in one test (or validation) episode')
+ parser.add_argument('--save-path', default='./experiments/meta_part_resnet12_mini')
+ parser.add_argument('--gpu', default='0')
+ parser.add_argument('--network', type=str, default='ResNet',
+ help='choose which embedding network to use. ProtoNet, R2D2, ResNet')
+ parser.add_argument('--head', type=str, default='FuseCosNet',
+ help='choose which classification head to use. FuseCosNet, CosineNet, ProtoNet, Ridge, R2D2, SVM')
+ parser.add_argument('--pre_head', type=str, default='LinearNet',
+ help='choose which pre-training head to use')
+ parser.add_argument('--dataset', type=str, default='miniImageNet',
+ help='choose which dataset to use. miniImageNet, tieredImageNet, CIFAR_FS, FC100')
+ parser.add_argument('--episodes-per-batch', type=int, default=8,
+ help='number of episodes per batch')
+ parser.add_argument('--eps', type=float, default=0.0,
+ help='epsilon of label smoothing')
+ parser.add_argument('--phase', type=str, default='metatest',
+ help='metainfer, pretrain, savepart, metatrain, metatest')
+ parser.add_argument('--use_trainval', type=str, default='False',
+ help='whether to train on the merged train+val split (True/False)')
+ parser.add_argument('--seed', type=int, default=45,
+ help='random seed')
+
+ opt = parser.parse_args()
+ seed_torch(opt.seed)
+
+ (dataset_train, dataset_val, dataset_test, data_loader) = get_dataset(opt)
+
+ if opt.phase == 'pretrain':
+ pre_train(opt, dataset_train, dataset_val, dataset_test, data_loader)
+ elif opt.phase == 'metatrain':
+ meta_train(opt, dataset_train, dataset_val, dataset_test, data_loader)
+ elif opt.phase == 'metatest':
+ meta_test(opt, dataset_train, dataset_val, dataset_test, data_loader)
+ elif opt.phase == 'savepart':
+ part_prototype(opt, dataset_train, dataset_val, dataset_test, data_loader)
+ else:
+ meta_inference(opt, dataset_train, dataset_val, dataset_test, data_loader)
+
+
+
+
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/PredTrainHead.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/PredTrainHead.py
new file mode 100644
index 0000000000000000000000000000000000000000..2dcd605cc848f71f8e639d7bb461fb24e5c98548
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/PredTrainHead.py
@@ -0,0 +1,79 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import torch.nn as nn
+import math
+import torch
+from torch.autograd import Variable
+import numpy as np
+import pickle
+import torch.nn.functional as F
+import torch.npu
+import os
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+class LinearClassifier(nn.Module):
+
+ def __init__(self, in_dim, n_classes):
+ super().__init__()
+ # self.dropout = nn.Dropout(p=0.2)
+ self.linear = nn.Linear(in_dim, n_classes, bias=True)
+
+ def forward(self, x):
+ # x = self.dropout(x)
+ return self.linear(x)
+
+class LinearRotateHead(nn.Module):
+ def __init__(self, in_dim=512, n_classes=100):
+ super(LinearRotateHead, self).__init__()
+
+ self.rotate_classifier = nn.Sequential(
+ nn.Linear(in_dim, 4)
+ )
+ self.cls_classifier = nn.Sequential(
+ nn.Linear(in_dim, n_classes)
+ )
+
+
+ def forward(self, x, use_cls=True):
+ if use_cls:
+ out = self.cls_classifier(x)
+ else:
+ out = self.rotate_classifier(x)
+ return out
+
+
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/__init__.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..79334ea308876dc3274bdce976691e9595798a37
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/__init__.py
@@ -0,0 +1,34 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/classification_heads.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/classification_heads.py
new file mode 100644
index 0000000000000000000000000000000000000000..bb577a8039bee500d5d862bf3ac30703383f33c7
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/classification_heads.py
@@ -0,0 +1,220 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import os
+import sys
+
+import torch
+from torch.autograd import Variable
+import torch.nn as nn
+from torch.nn import functional as F
+import numpy as np
+import torch.npu
+import os
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+def one_hot(indices, depth):
+ """
+ Returns a one-hot tensor.
+ This is a PyTorch equivalent of Tensorflow's tf.one_hot.
+
+ Parameters:
+ indices: a (n_batch, m) Tensor or (m) Tensor.
+ depth: a scalar. Represents the depth of the one hot dimension.
+ Returns: a (n_batch, m, depth) Tensor or (m, depth) Tensor.
+ """
+
+ encoded_indices = torch.zeros(indices.size() + torch.Size([depth])).npu()
+ index = indices.view(indices.size() + torch.Size([1]))
+ encoded_indices = encoded_indices.scatter_(1, index, 1)
+
+ return encoded_indices
+
+def CosineNetHead(k_all, meta_part_infer, query, support, support_labels, n_way, n_shot, is_scale=False, normalize=True):
+ """
+ Constructs the prototype representation of each class (= mean of the support vectors of each class),
+ refines it with the prototype completion network, and returns the classification score
+ (= cosine similarity to each class prototype) on the query set.
+
+ This model is the classification head described in:
+ Prototypical Networks for Few-shot Learning
+ (Snell et al., NIPS 2017).
+
+ Parameters:
+ query: a (tasks_per_batch, n_query, d) Tensor.
+ support: a (tasks_per_batch, n_support, d) Tensor.
+ support_labels: a (tasks_per_batch, n_support) Tensor.
+ n_way: a scalar. Represents the number of classes in a few-shot classification task.
+ n_shot: a scalar. Represents the number of support examples given per class.
+ normalize: a boolean. Indicates whether to normalize the distances by the embedding dimension.
+ Returns: a (tasks_per_batch, n_query, n_way) Tensor.
+ """
+
+ tasks_per_batch = query.size(0)
+ n_support = support.size(1)
+ n_query = query.size(1)
+ d = query.size(2)
+
+ assert (query.dim() == 3)
+ assert (support.dim() == 3)
+ assert (query.size(0) == support.size(0) and query.size(2) == support.size(2))
+ assert (n_support == n_way * n_shot) # n_support must equal n_way * n_shot
+
+ support_labels_one_hot = one_hot(support_labels.view(tasks_per_batch * n_support), n_way)
+ support_labels_one_hot = support_labels_one_hot.view(tasks_per_batch, n_support, n_way)
+
+ # From:
+ # https://github.com/gidariss/FewShotWithoutForgetting/blob/master/architectures/PrototypicalNetworksHead.py
+ # ************************* Compute Prototypes **************************
+ labels_train_transposed = support_labels_one_hot.transpose(1, 2)
+
+ prototypes = torch.bmm(labels_train_transposed, support)
+ # Divide with the number of examples per novel category.
+ prototypes = prototypes.div(
+ labels_train_transposed.sum(dim=2, keepdim=True).expand_as(prototypes)
+ )
+
+ boost_prototypes, _ = meta_part_infer(prototypes.reshape(-1, d), k_all.reshape(-1), is_infer=is_scale)
+ boost_prototypes = boost_prototypes.reshape(tasks_per_batch, n_way, d)
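+ # Average the mean-based prototypes and the completed prototypes with equal weights.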
+ prototypes = prototypes * 0.5 + boost_prototypes * 0.5
+ # Distance Matrix Vectorization Trick
+ logits = torch.nn.functional.cosine_similarity(query.unsqueeze(2).expand(-1, -1, prototypes.shape[1], -1),
+ prototypes.unsqueeze(1).expand(-1, query.shape[1], -1, -1), dim=-1)
+
+ return logits
+
+
+def FuseCosineNetHead(k_all, meta_part_infer, query, support, support_labels, n_way, n_shot, is_scale=False, normalize=True):
+ """
+ Constructs the mean-based prototype of each class (= mean of the support vectors of each class),
+ completes it with the prototype completion network, fuses the two estimates with the Gaussian-based
+ fusion strategy, and returns the classification score (= cosine similarity to each fused prototype)
+ on the query set.
+
+ The mean-based prototype computation follows:
+ Prototypical Networks for Few-shot Learning
+ (Snell et al., NIPS 2017).
+
+ Parameters:
+ query: a (tasks_per_batch, n_query, d) Tensor.
+ support: a (tasks_per_batch, n_support, d) Tensor.
+ support_labels: a (tasks_per_batch, n_support) Tensor.
+ n_way: a scalar. Represents the number of classes in a few-shot classification task.
+ n_shot: a scalar. Represents the number of support examples given per class.
+ normalize: a boolean. Indicates whether to normalize the distances by the embedding dimension.
+ Returns: a (tasks_per_batch, n_query, n_way) Tensor.
+ """
+ scale = 10
+ tasks_per_batch = query.size(0)
+ n_support = support.size(1)
+ n_query = query.size(1)
+ d = query.size(2)
+
+ assert (query.dim() == 3)
+ assert (support.dim() == 3)
+ assert (query.size(0) == support.size(0) and query.size(2) == support.size(2))
+ assert (n_support == n_way * n_shot) # n_support must equal n_way * n_shot
+
+ support_labels_one_hot = one_hot(support_labels.view(tasks_per_batch * n_support), n_way)
+ support_labels_one_hot = support_labels_one_hot.view(tasks_per_batch, n_support, n_way)
+
+ # From:
+ # https://github.com/gidariss/FewShotWithoutForgetting/blob/master/architectures/PrototypicalNetworksHead.py
+ # ************************* Compute Prototypes **************************
+ labels_train_transposed = support_labels_one_hot.transpose(1, 2)
+
+ prototypes = torch.bmm(labels_train_transposed, support)
+ # Divide with the number of examples per novel category.
+ prototypes = prototypes.div(
+ labels_train_transposed.sum(dim=2, keepdim=True).expand_as(prototypes)
+ )
+ if is_scale:
+ boost_prototypes, _ = meta_part_infer(prototypes.reshape(-1, d), k_all.reshape(-1), use_scale=is_scale, is_infer=is_scale)
+ boost_prototypes = boost_prototypes.reshape(tasks_per_batch, n_way, d)
+ else:
+ boost_prototypes = meta_part_infer(prototypes.reshape(-1, d), k_all.reshape(-1), use_scale=is_scale, is_infer=is_scale)
+ boost_prototypes = boost_prototypes[0].reshape(tasks_per_batch, n_way, d)
+
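+ # Gaussian-based prototype fusion, stage 1: soft-assign support and query samples to the mean-based
+ # prototypes via cosine similarity, then estimate a per-class weighted mean and variance from them.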
+ logits = torch.nn.functional.cosine_similarity(query.unsqueeze(2).expand(-1, -1, prototypes.shape[1], -1),
+ prototypes.unsqueeze(1).expand(-1, query.shape[1], -1, -1), dim=-1)
+ assign_1 = F.softmax(logits * scale, dim=-1)
+ assign_1 = torch.cat([support_labels_one_hot, assign_1], dim=1)
+ assign_1_transposed = assign_1.transpose(1, 2)
+ emb = torch.cat([support, query], dim=1)
+ mean_1 = torch.bmm(assign_1_transposed, emb)
+ mean_1 = mean_1.div(
+ assign_1_transposed.sum(dim=2, keepdim=True).expand_as(mean_1)
+ )
+ diff = torch.pow(emb.unsqueeze(1).expand(-1, n_way, -1, -1) - mean_1.unsqueeze(2).expand(-1, -1, emb.shape[1], -1), 2)
+ std_1 = (assign_1_transposed.unsqueeze(-1).expand_as(diff) * diff).sum(dim=2) / assign_1_transposed.unsqueeze(-1).expand_as(diff).sum(dim=2)
+
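+ # Stage 2: repeat the soft assignment using the completed prototypes to obtain a second mean/variance estimate.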
+ logits = torch.nn.functional.cosine_similarity(query.unsqueeze(2).expand(-1, -1, boost_prototypes.shape[1], -1),
+ boost_prototypes.unsqueeze(1).expand(-1, query.shape[1], -1, -1), dim=-1)
+ assign_2 = F.softmax(logits * scale, dim=-1)
+ assign_2 = torch.cat([support_labels_one_hot, assign_2], dim=1)
+ assign_2_transposed = assign_2.transpose(1, 2)
+ emb = torch.cat([support, query], dim=1)
+ mean_2 = torch.bmm(assign_2_transposed, emb)
+ mean_2 = mean_2.div(
+ assign_2_transposed.sum(dim=2, keepdim=True).expand_as(mean_2)
+ )
+ diff = torch.pow(emb.unsqueeze(1).expand(-1, n_way, -1, -1) - mean_2.unsqueeze(2).expand(-1, -1, emb.shape[1], -1), 2)
+ std_2 = (assign_2_transposed.unsqueeze(-1).expand_as(diff) * diff).sum(dim=2) / assign_2_transposed.unsqueeze(-1).expand_as(diff).sum(dim=2)
+
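+ # Fuse the two estimates: each mean is weighted by the variance of the other, so the lower-variance
+ # (more reliable) estimate contributes more to the final prototype.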
+ prototypes = (mean_1 * std_2 + mean_2 * std_1) / (std_2 + std_1)
+ logits = torch.nn.functional.cosine_similarity(query.unsqueeze(2).expand(-1, -1, prototypes.shape[1], -1),
+ prototypes.unsqueeze(1).expand(-1, query.shape[1], -1, -1), dim=-1)
+ # Distance Matrix Vectorization Trick
+ return logits
+
+class ClassificationHead(nn.Module):
+ def __init__(self, base_learner='MetaOptNet', enable_scale=True):
+ super(ClassificationHead, self).__init__()
+ if ('Cosine' in base_learner):
+ self.head = CosineNetHead
+ elif ('FuseCos' in base_learner):
+ self.head = FuseCosineNetHead
+ else:
+ print ("Cannot recognize the base learner type")
+ assert(False)
+
+ # Add a learnable scale
+ self.enable_scale = enable_scale
+ self.scale = nn.Parameter(torch.FloatTensor([1.0]))
+
+ def forward(self, k_all, meta_part_infer, query, support, support_labels, n_way, n_shot, **kwargs):
+ if self.enable_scale:
+ return self.scale * self.head(k_all, meta_part_infer, query, support, support_labels, n_way, n_shot, **kwargs)
+ else:
+ return self.head(k_all, meta_part_infer, query, support, support_labels, n_way, n_shot, **kwargs)
\ No newline at end of file
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/classification_heads_orgin.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/classification_heads_orgin.py
new file mode 100644
index 0000000000000000000000000000000000000000..dc6dffbfabc0321eec9bd084ff5dc908603f6e12
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/classification_heads_orgin.py
@@ -0,0 +1,135 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import os
+import sys
+
+import torch
+from torch.autograd import Variable
+import torch.nn as nn
+import torch.npu
+import os
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+def one_hot(indices, depth):
+ """
+ Returns a one-hot tensor.
+ This is a PyTorch equivalent of Tensorflow's tf.one_hot.
+
+ Parameters:
+ indices: a (n_batch, m) Tensor or (m) Tensor.
+ depth: a scalar. Represents the depth of the one hot dimension.
+ Returns: a (n_batch, m, depth) Tensor or (m, depth) Tensor.
+ """
+
+ encoded_indices = torch.zeros(indices.size() + torch.Size([depth])).npu()
+ index = indices.view(indices.size() + torch.Size([1]))
+ encoded_indices = encoded_indices.scatter_(1, index, 1)
+
+ return encoded_indices
+
+def CosineNetHead(query, support, support_labels, n_way, n_shot, normalize=True):
+ """
+ Constructs the prototype representation of each class (= mean of the support vectors of each class) and
+ returns the classification score (= cosine similarity to each class prototype) on the query set.
+
+ This model is the classification head described in:
+ Prototypical Networks for Few-shot Learning
+ (Snell et al., NIPS 2017).
+
+ Parameters:
+ query: a (tasks_per_batch, n_query, d) Tensor.
+ support: a (tasks_per_batch, n_support, d) Tensor.
+ support_labels: a (tasks_per_batch, n_support) Tensor.
+ n_way: a scalar. Represents the number of classes in a few-shot classification task.
+ n_shot: a scalar. Represents the number of support examples given per class.
+ normalize: a boolean. Indicates whether to normalize the distances by the embedding dimension.
+ Returns: a (tasks_per_batch, n_query, n_way) Tensor.
+ """
+
+ tasks_per_batch = query.size(0)
+ n_support = support.size(1)
+ n_query = query.size(1)
+ d = query.size(2)
+
+ assert (query.dim() == 3)
+ assert (support.dim() == 3)
+ assert (query.size(0) == support.size(0) and query.size(2) == support.size(2))
+ assert (n_support == n_way * n_shot) # n_support must equal n_way * n_shot
+
+ support_labels_one_hot = one_hot(support_labels.view(tasks_per_batch * n_support), n_way)
+ support_labels_one_hot = support_labels_one_hot.view(tasks_per_batch, n_support, n_way)
+
+ # From:
+ # https://github.com/gidariss/FewShotWithoutForgetting/blob/master/architectures/PrototypicalNetworksHead.py
+ # ************************* Compute Prototypes **************************
+ labels_train_transposed = support_labels_one_hot.transpose(1, 2)
+ # Batch matrix multiplication:
+ # prototypes = labels_train_transposed * features_train ==>
+ # [batch_size x nKnovel x num_channels] =
+ # [batch_size x nKnovel x num_train_examples] * [batch_size * num_train_examples * num_channels]
+ prototypes = torch.bmm(labels_train_transposed, support)
+ # Divide with the number of examples per novel category.
+ prototypes = prototypes.div(
+ labels_train_transposed.sum(dim=2, keepdim=True).expand_as(prototypes)
+ )
+
+ # Distance Matrix Vectorization Trick
+ logits = torch.nn.functional.cosine_similarity(query.unsqueeze(2).expand(-1, -1, prototypes.shape[1], -1),
+ prototypes.unsqueeze(1).expand(-1, query.shape[1], -1, -1), dim=-1)
+
+
+ return logits
+
+class ClassificationHead(nn.Module):
+ def __init__(self, base_learner='MetaOptNet', enable_scale=True):
+ super(ClassificationHead, self).__init__()
+ if ('Cosine' in base_learner):
+ self.head = CosineNetHead
+ else:
+ print ("Cannot recognize the base learner type")
+ assert(False)
+
+ # Add a learnable scale
+ self.enable_scale = enable_scale
+ self.scale = nn.Parameter(torch.FloatTensor([1.0]))
+
+ def forward(self, query, support, support_labels, n_way, n_shot, **kwargs):
+ if self.enable_scale:
+ return self.scale * self.head(query, support, support_labels, n_way, n_shot, **kwargs)
+ else:
+ return self.head(query, support, support_labels, n_way, n_shot, **kwargs)
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/meta_part_inference_mini.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/meta_part_inference_mini.py
new file mode 100644
index 0000000000000000000000000000000000000000..26d448aabe85d66cc3281761dd76dcaa8bff8e8f
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/meta_part_inference_mini.py
@@ -0,0 +1,178 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import torch.nn as nn
+import math
+import pickle
+import numpy as np
+import scipy.sparse as sp
+import torch
+import torch.nn.functional as F
+import os
+import random
+import torch.npu
+import os
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+class ProtoComNet(nn.Module):
+ def __init__(self, opt, in_dim=1600):
+ super(ProtoComNet, self).__init__()
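+ # The encoder/decoder map prototype features to and from a latent space; the aggregator scores each
+ # (class, part) pair from the concatenated word vectors and the visual prototype, and its output is
+ # used as a soft adjacency weight over the primitive-knowledge graph.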
+ self.encoder = nn.Sequential(
+ nn.Linear(in_features=in_dim, out_features=in_dim//2),
+ nn.ReLU(inplace=True),
+ )
+ self.decoder = nn.Sequential(
+ nn.Linear(in_features=in_dim//2, out_features=512),
+ nn.ReLU(inplace=True),
+ nn.Linear(in_features=512, out_features=in_dim)
+ )
+ self.aggregator = nn.Sequential(
+ nn.Linear(in_features=600+512, out_features=300),
+ nn.ReLU(inplace=True),
+ nn.Linear(in_features=300, out_features=1)
+ )
+ with open('./data/mini_imagenet_part_prior.pickle', 'rb') as handle:
+ part_prior = pickle.load(handle)
+ self.part_prior = part_prior
+ edges = np.array(part_prior['edges'])
+ n = len(part_prior['wnids'])
+ self.adj = sp.coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])),
+ shape=(n, n), dtype='float32')
+ self.adj = self.adj.todense()
+ self.adj = torch.from_numpy(self.adj).npu()
+
+ train_class_name_file = './data/mini_imagenet_catname2label_train.pickle'
+ val_class_name_file = './data/mini_imagenet_catname2label_val.pickle'
+ test_class_name_file = './data/mini_imagenet_catname2label_test.pickle'
+ with open(train_class_name_file, 'rb') as handle:
+ catname2label_train = pickle.load(handle)
+ with open(val_class_name_file, 'rb') as handle:
+ catname2label_val = pickle.load(handle)
+ with open(test_class_name_file, 'rb') as handle:
+ catname2label_test = pickle.load(handle)
+ self.catname2label = dict(catname2label_train, **catname2label_val)
+ self.catname2label = dict(self.catname2label, **catname2label_test)
+ self.label2catname = {v: k for k, v in self.catname2label.items()}
+ word_vectors = torch.tensor(part_prior['vectors']).npu()
+ word_vectors = F.normalize(word_vectors)
+ semantic_feature_0 = word_vectors.unsqueeze(dim=1).expand(-1, n, -1)
+ semantic_feature_1 = word_vectors.unsqueeze(dim=0).expand(n, -1, -1)
+ self.semantic_feature = torch.cat([semantic_feature_0, semantic_feature_1], dim=-1)
+ try:
+ with open(os.path.join(opt.save_path, "mini_imagenet_metapart_feature.pickle"), 'rb') as handle:
+ self.metapart_feature = pickle.load(handle)
+
+ with open(os.path.join(opt.save_path, "mini_imagenet_class_feature.pickle"), 'rb') as handle:
+ self.class_feature = pickle.load(handle)
+ except:
+ print('not found: ' + os.path.join(opt.save_path, "mini_imagenet_metapart_feature.pickle")
+ + ' ' + os.path.join(opt.save_path, "mini_imagenet_class_feature.pickle"))
+ self.n = n
+ self.in_dim = in_dim
+
+ def forward(self, x, y, use_scale=False, is_infer=False):
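+ # Training mode (is_infer=False): rebuild each class prototype from sampled part features over the
+ # primitive-knowledge graph and return (reconstructed prototype, target class-mean feature) pairs.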
+ if is_infer == False:
+ nb = x.shape[0]
+ outputs = []
+ targets = []
+ for i in range(nb):
+ input_feature = torch.zeros(self.n, self.in_dim).npu()
+ for k, v in self.metapart_feature.items():
+ input_feature[k:k+1, :] = self.reparameterize(v['mean'], v['std'])
+ input_feature[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][self.label2catname[y[i].item()]] + 1, :] = x[i:i+1, :]
+
+ semantic_feature = self.semantic_feature[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][self.label2catname[y[i].item()]]+1, :, :]
+
+ semantic_feature = torch.cat([semantic_feature, x[i:i+1, :].unsqueeze(0).expand(-1, self.n, -1)], dim=-1)
+ fuse_adj = self.aggregator(semantic_feature).squeeze(dim=-1)
+
+ fuse_adj = self.adj[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][self.label2catname[y[i].item()]] + 1, :] * fuse_adj
+
+ eye = 1 - torch.eye(self.adj.shape[0]).type_as(fuse_adj)
+ adj = fuse_adj * eye[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][self.label2catname[y[i].item()]] + 1, :] + torch.eye(
+ self.adj.shape[0]).type_as(fuse_adj)[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][
+ self.label2catname[y[i].item()]] + 1, :]
+
+ z = self.encoder(input_feature)
+ g = torch.mm(adj, z)
+ out = self.decoder(g)
+ outputs.append(out)
+ targets.append(self.class_feature[y[i].item()]['mean'])
+ outputs = torch.cat(outputs, dim=0)
+ targets = torch.cat(targets, dim=0)
+ return outputs, targets
+ else:
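+ # Inference mode: use the stored mean part features (no sampling) and return only the completed prototypes.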
+ nb = x.shape[0]
+ outputs = []
+ for i in range(nb):
+ input_feature = torch.zeros(self.n, self.in_dim).npu()
+ for k, v in self.metapart_feature.items():
+ input_feature[k:k + 1, :] = v['mean']
+ input_feature[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][self.label2catname[y[i].item()]] + 1, :] = x[i:i + 1, :]
+
+ semantic_feature = self.semantic_feature[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][
+ self.label2catname[y[i].item()]] + 1, :, :]
+ semantic_feature = torch.cat([semantic_feature, x[i:i + 1, :].unsqueeze(0).expand(-1, self.n, -1)],
+ dim=-1)
+ fuse_adj = self.aggregator(semantic_feature).squeeze(dim=-1)
+ fuse_adj = self.adj[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][self.label2catname[y[i].item()]] + 1, :] * fuse_adj
+ eye = 1 - torch.eye(self.adj.shape[0]).type_as(fuse_adj)
+ adj = fuse_adj * eye[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][self.label2catname[y[i].item()]] + 1, :] + torch.eye(
+ self.adj.shape[0]).type_as(fuse_adj)[self.part_prior['wnids2id'][self.label2catname[y[i].item()]]:
+ self.part_prior['wnids2id'][
+ self.label2catname[y[i].item()]] + 1, :]
+ z = self.encoder(input_feature)
+ out = torch.mm(adj, z)
+ out = self.decoder(out)
+ outputs.append(out)
+ outputs = torch.cat(outputs, dim=0)
+
+ return outputs, None
+
+ def reparameterize(self, mu, var):
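+ # Reparameterization trick: draw a sample mu + eps * std with eps ~ N(0, I); here `var` is treated
+ # directly as the standard deviation.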
+ std = var
+ eps = torch.randn_like(std)
+ return mu + eps*std
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/resnet12_2.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/resnet12_2.py
new file mode 100644
index 0000000000000000000000000000000000000000..93611df64940ce5e9bde2694aa34bac6f8957843
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/models/resnet12_2.py
@@ -0,0 +1,141 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import torch.nn as nn
+import torch.npu
+import os
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+def conv3x3(in_planes, out_planes):
+ return nn.Conv2d(in_planes, out_planes, 3, padding=1, bias=False)
+
+
+def conv1x1(in_planes, out_planes):
+ return nn.Conv2d(in_planes, out_planes, 1, bias=False)
+
+
+def norm_layer(planes):
+ return nn.BatchNorm2d(planes)
+
+
+class Block(nn.Module):
+
+ def __init__(self, inplanes, planes, downsample, use_relu=True):
+ super().__init__()
+
+ self.use_relu = use_relu
+ self.relu = nn.LeakyReLU(0.1)
+
+ self.conv1 = conv3x3(inplanes, planes)
+ self.bn1 = norm_layer(planes)
+ self.conv2 = conv3x3(planes, planes)
+ self.bn2 = norm_layer(planes)
+ self.conv3 = conv3x3(planes, planes)
+ self.bn3 = norm_layer(planes)
+
+ self.downsample = downsample
+
+ self.maxpool = nn.MaxPool2d(2)
+
+ def forward(self, x):
+ out = self.conv1(x)
+ out = self.bn1(out)
+ out = self.relu(out)
+
+ out = self.conv2(out)
+ out = self.bn2(out)
+ out = self.relu(out)
+
+ out = self.conv3(out)
+ out = self.bn3(out)
+
+ identity = self.downsample(x)
+
+ out += identity
+ if self.use_relu:
+ out = self.relu(out)
+
+ out = self.maxpool(out)
+
+ return out
+
+
+class ResNet12(nn.Module):
+
+ def __init__(self, channels):
+ super().__init__()
+
+ self.inplanes = 3
+
+ self.layer1 = self._make_layer(channels[0])
+ self.layer2 = self._make_layer(channels[1])
+ self.layer3 = self._make_layer(channels[2])
+ self.layer4 = self._make_layer(channels[3], use_relu=False)
+
+ self.out_dim = channels[3]
+
+ for m in self.modules():
+ if isinstance(m, nn.Conv2d):
+ nn.init.kaiming_normal_(m.weight, mode='fan_out',
+ nonlinearity='leaky_relu')
+ elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
+ nn.init.constant_(m.weight, 1)
+ nn.init.constant_(m.bias, 0)
+
+ def _make_layer(self, planes, use_relu=True):
+ downsample = nn.Sequential(
+ conv1x1(self.inplanes, planes),
+ norm_layer(planes),
+ )
+ block = Block(self.inplanes, planes, downsample, use_relu=use_relu)
+ self.inplanes = planes
+ return block
+
+ def forward(self, x, use_pool=True):
+ x = self.layer1(x)
+ x = self.layer2(x)
+ x = self.layer3(x)
+ x = self.layer4(x)
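+ # Global average pooling over the spatial dimensions, yielding one feature vector per image.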
+ if use_pool:
+ x = x.view(x.shape[0], x.shape[1], -1).mean(dim=2)
+ return x
+
+def resnet12():
+ return ResNet12([64, 128, 256, 512])
+
+def resnet12_wide():
+ return ResNet12([64, 160, 320, 640])
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/prior/glove.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/prior/glove.py
new file mode 100644
index 0000000000000000000000000000000000000000..2e3a9cd0da47dfc7cb7a34a7947169cd691f0268
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/prior/glove.py
@@ -0,0 +1,101 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import torch
+import torch.npu
+import os
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+
+class GloVe():
+
+ def __init__(self, file_path):
+ self.dimension = None
+ self.embedding = dict()
+ with open(file_path, 'r') as f:
+ for line in f.readlines():
+ strs = line.rstrip().split(' ')
+ word = strs[0].lower()
+ vector = torch.FloatTensor(list(map(float, strs[1:])))
+ self.embedding[word] = vector
+ if self.dimension is None:
+ self.dimension = len(vector)
+
+ def _fix_word(self, word):
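+ # Handle multi-word terms: average the embeddings of the individual words; for a word missing from the
+ # vocabulary, fall back to averaging its hyphen-separated sub-terms.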
+ terms = word.replace('_', ' ').split(' ')
+ ret = self.zeros()
+ cnt = 0
+ for term in terms:
+ v = self.embedding.get(term)
+ if v is None:
+ subterms = term.split('-')
+ subterm_sum = self.zeros()
+ subterm_cnt = 0
+ for subterm in subterms:
+ subv = self.embedding.get(subterm)
+ if subv is not None:
+ subterm_sum += subv
+ subterm_cnt += 1
+ if subterm_cnt > 0:
+ v = subterm_sum / subterm_cnt
+ if v is not None:
+ ret += v
+ cnt += 1
+ return ret / cnt if cnt > 0 else None
+
+ def __getitem__(self, words):
+ if type(words) is str:
+ words = [words]
+ ret = self.zeros()
+ cnt = 0
+ for word in words:
+ word = word.lower()
+ # print(word)
+ v = self.embedding.get(word)
+ if v is None:
+ v = self._fix_word(word)
+ if v is not None:
+ ret += v
+ cnt += 1
+ if cnt > 0:
+ return ret / cnt
+ else:
+ return self.zeros()
+
+ def zeros(self):
+ return torch.zeros(self.dimension)
+
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/prior/make_miniimagenet_primitive_knowledge.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/prior/make_miniimagenet_primitive_knowledge.py
new file mode 100644
index 0000000000000000000000000000000000000000..93dd5f2f05479eb69ddf11549f41628caab48f0a
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/prior/make_miniimagenet_primitive_knowledge.py
@@ -0,0 +1,224 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import argparse
+import json
+import pickle
+
+from nltk.corpus import wordnet as wn
+import torch
+import numpy as np
+
+from prior.glove import GloVe
+import torch.npu
+import os
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+
+def getnode(x):
+ return wn.synset_from_pos_and_offset('n', int(x[1:]))
+
+
+def getwnid(u):
+ s = str(u.offset())
+ return 'n' + (8 - len(s)) * '0' + s
+
+
+def constructedges(s, syns2id):
+ edges = []
+ for k, vs in s.items():
+ for v in vs:
+ edges.append((syns2id[k], syns2id[v]))
+ return edges
+
+def make_attribute_node(syns, train_nodes, val_nodes, test_nodes):
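+ # Collect part meronyms along each synset's WordNet hypernym paths as candidate attribute (part) nodes,
+ # then keep only the attributes shared between the training split and the val/test splits (each class
+ # also keeps itself as a node).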
+ syns_paths = []
+ syns_len = len(syns)
+ for i in range(syns_len):
+ if i == 96:
+ print('stop')
+ paths = syns[i].hypernym_paths()
+ syns_paths.extend(paths)
+ print('number {}: {}'.format(i, [path[4].lemma_names for path in paths]))
+ for i in range(20):
+ try:
+ syns_i = [path[i] for path in syns_paths]
+ print('number {}: {}'.format(i, len(set(syns_i))))
+ except:
+ print('number {}: {}'.format(i, 0))
+
+ attrbute = []
+ syns_attrbute = {}
+ syns_len = len(syns)
+ for i in range(syns_len):
+ attrbute.append(syns[i])
+ for i in range(syns_len):
+ syns_attrbute[syns[i]] = []
+ have_attri = False
+ for paths in syns[i].hypernym_paths():
+ for sys in paths:
+ parts = sys.part_meronyms()
+ # parts = sys.substance_meronyms()
+ attrbute.extend(parts)
+ syns_attrbute[syns[i]].extend(parts)
+ if len(parts) != 0:
+ have_attri = True
+ if have_attri:
+ print('number {}: {}'.format(i, 'attribute'))
+ else:
+ print('number {}: {}'.format(i, 'no attribute'))
+ syns_attrbute[syns[i]] = list(set(syns_attrbute[syns[i]]))
+ attrbute = list(set(attrbute))
+
+ # Collect the attribute set for each dataset split
+ train_attribute = []
+ for i in range(len(train_nodes)):
+ train_attribute.extend(syns_attrbute[train_nodes[i]])
+ train_attribute = list(set(train_attribute))
+
+ val_attribute = []
+ for i in range(len(val_nodes)):
+ val_attribute.extend(syns_attrbute[val_nodes[i]])
+ val_attribute = list(set(val_attribute))
+
+ test_attribute = []
+ for i in range(len(test_nodes)):
+ test_attribute.extend(syns_attrbute[test_nodes[i]])
+ test_attribute = list(set(test_attribute))
+
+ attrbute_rm = []
+ syns_attrbute_rm = {}
+ train_attribute_rm = []
+ val_attribute_rm = []
+ test_attribute_rm = []
+
+ for attr in train_attribute:
+ if attr in val_attribute or attr in test_attribute:
+ train_attribute_rm.append(attr)
+
+ for attr in val_attribute:
+ if attr in train_attribute:
+ val_attribute_rm.append(attr)
+
+ for attr in test_attribute:
+ if attr in train_attribute:
+ test_attribute_rm.append(attr)
+
+ attrbute_rm = syns + list(set(train_attribute_rm + val_attribute_rm + test_attribute_rm))
+ for syn in syns:
+ attrs = syns_attrbute[syn]
+ syns_attrbute_rm[syn] = [attr for attr in attrs if attr in list(set(train_attribute_rm + val_attribute_rm + test_attribute_rm))] + [syn, ]
+
+ return attrbute_rm, syns_attrbute_rm
+
+if __name__ == '__main__':
+ output = '../data/mini_imagenet_part_prior_train.pickle'
+ train_class_name_file = '../data/mini_imagenet_catname2label_train.pickle'
+ val_class_name_file = '../data/mini_imagenet_catname2label_val.pickle'
+ test_class_name_file = '../data/mini_imagenet_catname2label_test.pickle'
+ print('making graph ...')
+
+ with open(train_class_name_file, 'rb') as handle:
+ catname2label_train = pickle.load(handle)
+ wnids_train = catname2label_train.keys()
+ with open(val_class_name_file, 'rb') as handle:
+ catname2label_val = pickle.load(handle)
+ wnids_val = catname2label_val.keys()
+ with open(test_class_name_file, 'rb') as handle:
+ catname2label_test = pickle.load(handle)
+ wnids_test = catname2label_test.keys()
+
+ # all_wnids = list(wnids_train) + list(wnids_val)
+ all_wnids = list(wnids_train) + list(wnids_val) + list(wnids_test)
+ all_wnids = list(np.unique(all_wnids))
+
+ all_nodes = list(map(getnode, all_wnids))
+ train_nodes = list(map(getnode, list(wnids_train)))
+ val_nodes = list(map(getnode, list(wnids_val)))
+ test_nodes = list(map(getnode, list(wnids_test)))
+ # all_set = set(all_nodes)
+
+ attribute_node, node_attribute_dict = make_attribute_node(all_nodes,
+ train_nodes,
+ val_nodes,
+ test_nodes)
+
+ wnids = list(map(getwnid, attribute_node))
+ wnids2id = {wnid:i for i, wnid in enumerate(wnids)}
+ id2wnids = {v: k for k, v in wnids2id.items()}
+ syns2id = {getnode(wnid): i for i, wnid in id2wnids.items()}
+ edges = constructedges(node_attribute_dict, syns2id)
+ class_attribute_id_dict = {syns2id[k]: [syns2id[v] for v in vs] for k, vs in node_attribute_dict.items()}
+ attribute_id_class_dict = {}
+ for k, vs in class_attribute_id_dict.items():
+ for v in vs:
+ if v in attribute_id_class_dict.keys():
+ attribute_id_class_dict[v].append(k)
+ else:
+ attribute_id_class_dict[v] = [k, ]
+
+ print('making glove embedding ...')
+
+ glove = GloVe('/extend/zhangbq/code/datasets/few_shot_data/glove.840B.300d.txt')
+ vectors = []
+ num = 0
+ for wnid in wnids:
+ # print(getnode(wnid).lemma_names())
+ vectors.append(glove[getnode(wnid).lemma_names()])
+ if torch.sum(torch.abs(vectors[-1])) == 0:
+ print('wnid: {}, {}'.format(wnid, getnode(wnid).lemma_names()))
+ num+=1
+ print(num)
+ vectors = torch.stack(vectors)
+
+ print('dumping ...')
+
+ obj = {}
+ obj['all_wnids'] = all_wnids
+ obj['wnids_train'] = list(wnids_train)
+ obj['wnids_val'] = list(wnids_val)
+ obj['wnids_test'] = list(wnids_test)
+ obj['wnids'] = wnids
+ obj['vectors'] = vectors.tolist()
+ obj['wnids2id'] = wnids2id
+ obj['id2wnids'] = id2wnids
+ obj['class_attribute_id_dict'] = class_attribute_id_dict
+ obj['attribute_id_class_dict'] = attribute_id_class_dict
+ obj['edges'] = edges
+ with open(output, 'wb') as handle:
+ pickle.dump(obj, handle, protocol=pickle.HIGHEST_PROTOCOL)
+
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/test/train_full_1p.sh b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/test/train_full_1p.sh
new file mode 100644
index 0000000000000000000000000000000000000000..9a82fa47387e4974edaebd6e66a2cf7624393303
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/test/train_full_1p.sh
@@ -0,0 +1,202 @@
+#!/bin/bash
+
+#Current path; no modification needed
+cur_path=`pwd`
+#export ASCEND_SLOG_PRINT_TO_STDOUT=1
+export NPU_CALCULATE_DEVICE=$ASCEND_DEVICE_ID
+#Collective communication parameters; no modification needed
+
+export RANK_SIZE=1
+export JOB_ID=10087
+RANK_ID_START=0
+
+#export HCCL_WHITELIST_DISABLE=1
+#export MASTER_ADDR=127.0.0.1
+#export MASTER_PORT=23456
+#export RANK=0
+#export WORLD_SIZE=1
+
+#Enter the conda environment
+#export PATH=/usr/local/python3.7.5/bin:/home/anaconda3/bin:$PATH
+#source activate py8
+
+
+# Dataset path; keep empty, no modification needed
+data_path=""
+
+#Basic parameters; review and modify per model
+#Network name, same as the directory name
+Network="Prototype-Completion_ID2464_for_PyTorch"
+#Number of training epochs
+train_epochs=100
+#Training batch size
+batch_size=128
+#Training steps
+#train_steps=`expr 1281167 / ${batch_size}`
+#Learning rate
+learning_rate=0.495
+
+#TF2.X only; no modification needed
+#export NPU_LOOP_SIZE=${train_steps}
+
+#Debug/monitoring parameters; precision_mode requires model review
+precision_mode="allow_mix_precision"
+#Maintenance parameters; no modification needed below
+over_dump=False
+data_dump_flag=False
+data_dump_step="10"
+profiling=False
+autotune=False
+
+# Help message; no modification needed
+if [[ $1 == --help || $1 == -h ]];then
+ echo "usage:./train_full_1p.sh "
+ echo " "
+ echo "parameter explanation:
+ --precision_mode precision mode (allow_fp32_to_fp16/force_fp16/must_keep_origin_dtype/allow_mix_precision)
+ --over_dump whether to enable overflow detection, default is False
+ --data_dump_flag data dump flag, default is False
+ --data_dump_step data dump step, default is 10
+ --profiling whether to enable profiling for performance debugging, default is False
+ --data_path source data of training
+ -h/--help show help message
+ "
+ exit 1
+fi
+
+#Parameter validation; no modification needed
+for para in $*
+do
+ if [[ $para == --precision_mode* ]];then
+ precision_mode=`echo ${para#*=}`
+ elif [[ $para == --over_dump* ]];then
+ over_dump=`echo ${para#*=}`
+ over_dump_path=${cur_path}/output/overflow_dump
+ mkdir -p ${over_dump_path}
+ elif [[ $para == --data_dump_flag* ]];then
+ data_dump_flag=`echo ${para#*=}`
+ data_dump_path=${cur_path}/output/data_dump
+ mkdir -p ${data_dump_path}
+ elif [[ $para == --data_dump_step* ]];then
+ data_dump_step=`echo ${para#*=}`
+ elif [[ $para == --profiling* ]];then
+ profiling=`echo ${para#*=}`
+ profiling_dump_path=${cur_path}/output/profiling
+ mkdir -p ${profiling_dump_path}
+ elif [[ $para == --data_path* ]];then
+ data_path=`echo ${para#*=}`
+ fi
+done
+
+#Check that data_path has been passed in; no modification needed
+if [[ $data_path == "" ]];then
+ echo "[Error] para \"data_path\" must be configured"
+ exit 1
+fi
+
+#Training start time; no modification needed
+start_time=$(date +%s)
+
+#Enter the training script directory; review and modify per model
+cd $cur_path/../
+
+
+sed -i "s|../datasets/few_shot_data|$data_path|g" data/mini_imagenet.py
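+#The sed above redirects the dataset path in data/mini_imagenet.py to the user-supplied $data_path; it is restored after training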
+#sed -i "s|_C.TRAIN.EPOCHS = 300|_C.TRAIN.EPOCHS = 1|g" config.py
+#sed -i "s|EPOCHS = 75|EPOCHS = 1|g" config.py
+
+#cd pytorch-timm
+#yes|pip3 uninstall timm
+#python3 setup.py develop > $cur_path/../install.txt
+
+#mkdir -p checkpoints
+#mkdir -p /root/.cache/torch/hub/checkpoints
+#cp $data_path/fcn_* /root/.cache/torch/hub/checkpoints
+
+for((RANK_ID=$RANK_ID_START;RANK_ID<$((RANK_SIZE+RANK_ID_START));RANK_ID++));
+do
+ #Set environment variables; no modification needed
+ echo "Device ID: $ASCEND_DEVICE_ID"
+ export RANK_ID=$RANK_ID
+
+
+
+ #Create the DeviceID output directory; no modification needed
+ if [ -d ${cur_path}/output/${ASCEND_DEVICE_ID} ];then
+ rm -rf ${cur_path}/output/${ASCEND_DEVICE_ID}
+ mkdir -p ${cur_path}/output/$ASCEND_DEVICE_ID/ckpt
+ else
+ mkdir -p ${cur_path}/output/$ASCEND_DEVICE_ID/ckpt
+ fi
+
+ #CPU core binding; remove for models that do not need it, adjust for those that do
+ #cpucount=`lscpu | grep "CPU(s):" | head -n 1 | awk '{print $2}'`
+ #cpustep=`expr $cpucount / 8`
+ #echo "taskset c steps:" $cpustep
+ #let a=RANK_ID*$cpustep
+ #let b=RANK_ID+1
+ #let c=b*$cpustep-1
+
+ #Run the training script; the arguments below need no modification, others require model review
+ nohup python3 main.py --phase pretrain --gpu 0 --save-path "./experiments/meta_part_resnet12_mini" \
+--head CosineNet --network ResNet --pre_head LinearNet --dataset miniImageNet --num-epoch $train_epochs > ${cur_path}/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log 2>&1 &
+done
+wait
+
+#Restore the modified parameters
+sed -i "s|$data_path|../datasets/few_shot_data|g" data/mini_imagenet.py
+#sed -i "s|epochs = 1|epochs = 20|g" examples/cats_and_dogs.py
+
+
+#conda deactivate
+#Training end time; no modification needed
+end_time=$(date +%s)
+e2e_time=$(( $end_time - $start_time ))
+
+#Print results; no modification needed
+echo "------------------ Final result ------------------"
+#Output performance (FPS); review and modify per model
+time=`grep "time/step" $cur_path/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log|awk -F ":" '{print $6}'|tail -n +3|awk '{sum+=$1} END {print"",sum/NR}'|sed s/[[:space:]]//g`
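+#FPS is batch_size divided by the average per-step time parsed above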
+FPS=`awk 'BEGIN{printf "%.2f\n",'${batch_size}'/'${time}'}'`
+
+
+#Print; no modification needed
+echo "Final Performance images/sec : $FPS"
+
+#Output training accuracy; review and modify per model
+#train_accuracy=`grep eval_accuracy $cur_path/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log|grep -v mlp_log|awk 'END {print $5}'| sed 's/,//g' |cut -c 1-5`
+
+train_accuracy=`grep "Validation Epoch" $cur_path/output/$ASCEND_DEVICE_ID/train_$ASCEND_DEVICE_ID.log|awk -F "Accuracy: " '{print $2}'|awk '{print $1}'|awk 'NR==1{max=$1;next}{max=max>$1?max:$1}END{print max}'`
+#Print; no modification needed
+echo "Final Train Accuracy : ${train_accuracy}"
+echo "E2E Training Duration sec : $e2e_time"
+
+#Summary of results for stability and accuracy monitoring
+#Training case information; no modification needed
+BatchSize=${batch_size}
+DeviceType=`uname -m`
+CaseName=${Network}_bs${BatchSize}_${RANK_SIZE}'p'_'acc'
+
+##Collect performance data
+#Throughput; no modification needed
+ActualFPS=${FPS}
+#Training time per iteration; no modification needed
+TrainingTime=`awk 'BEGIN{printf "%.2f\n",'${BatchSize}'*1000/'${FPS}'}'`
+
+#Extract Loss from train_$ASCEND_DEVICE_ID.log into train_${CaseName}_loss.txt; review per model
+grep "time/step" $cur_path/output/$ASCEND_DEVICE_ID/train_$ASCEND_DEVICE_ID.log|awk -F "Loss:" '{print $2}'|awk '{print $1}' >> $cur_path/output/$ASCEND_DEVICE_ID/train_${CaseName}_loss.txt
+
+#Loss value of the last iteration; no modification needed
+ActualLoss=`awk 'END {print}' $cur_path/output/$ASCEND_DEVICE_ID/train_${CaseName}_loss.txt`
+
+#Print key information into ${CaseName}.log; no modification needed
+echo "Network = ${Network}" > $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "RankSize = ${RANK_SIZE}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "BatchSize = ${BatchSize}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "DeviceType = ${DeviceType}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "CaseName = ${CaseName}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "ActualFPS = ${ActualFPS}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "TrainingTime = ${TrainingTime}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "TrainAccuracy = ${train_accuracy}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "ActualLoss = ${ActualLoss}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "E2ETrainingTime = ${e2e_time}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
\ No newline at end of file
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/test/train_performance_1p.sh b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/test/train_performance_1p.sh
new file mode 100644
index 0000000000000000000000000000000000000000..cb1773d3ad1e25bb66776f7273f0b0d2918ed743
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/test/train_performance_1p.sh
@@ -0,0 +1,200 @@
+#!/bin/bash
+
+#Current path; no modification needed
+cur_path=`pwd`
+export ASCEND_SLOG_PRINT_TO_STDOUT=1
+export NPU_CALCULATE_DEVICE=$ASCEND_DEVICE_ID
+#Collective communication parameters; no modification needed
+
+export RANK_SIZE=1
+export JOB_ID=10087
+RANK_ID_START=0
+
+#export HCCL_WHITELIST_DISABLE=1
+#export MASTER_ADDR=127.0.0.1
+#export MASTER_PORT=23456
+#export RANK=0
+#export WORLD_SIZE=1
+
+#Enter the conda environment
+#export PATH=/usr/local/python3.7.5/bin:/home/anaconda3/bin:$PATH
+#source activate py8
+
+
+# Dataset path; keep empty, no modification needed
+data_path=""
+
+#Basic parameters; review and modify per model
+#Network name, same as the directory name
+Network="Prototype-Completion_ID2464_for_PyTorch"
+#Number of training epochs
+train_epochs=1
+#Training batch size
+batch_size=128
+#Training steps
+#train_steps=`expr 1281167 / ${batch_size}`
+#Learning rate
+learning_rate=0.495
+
+#TF2.X only; no modification needed
+#export NPU_LOOP_SIZE=${train_steps}
+
+#Debug/monitoring parameters; precision_mode requires model review
+precision_mode="allow_mix_precision"
+#Maintenance parameters; no modification needed below
+over_dump=False
+data_dump_flag=False
+data_dump_step="10"
+profiling=False
+autotune=False
+
+# Help message; no modification needed
+if [[ $1 == --help || $1 == -h ]];then
+ echo "usage:./train_performance_1p.sh "
+ echo " "
+ echo "parameter explanation:
+ --precision_mode precision mode (allow_fp32_to_fp16/force_fp16/must_keep_origin_dtype/allow_mix_precision)
+ --over_dump whether to enable overflow detection, default is False
+ --data_dump_flag data dump flag, default is False
+ --data_dump_step data dump step, default is 10
+ --profiling whether to enable profiling for performance debugging, default is False
+ --data_path source data of training
+ -h/--help show help message
+ "
+ exit 1
+fi
+
+#Parameter validation; no modification needed
+for para in $*
+do
+ if [[ $para == --precision_mode* ]];then
+ precision_mode=`echo ${para#*=}`
+ elif [[ $para == --over_dump* ]];then
+ over_dump=`echo ${para#*=}`
+ over_dump_path=${cur_path}/output/overflow_dump
+ mkdir -p ${over_dump_path}
+ elif [[ $para == --data_dump_flag* ]];then
+ data_dump_flag=`echo ${para#*=}`
+ data_dump_path=${cur_path}/output/data_dump
+ mkdir -p ${data_dump_path}
+ elif [[ $para == --data_dump_step* ]];then
+ data_dump_step=`echo ${para#*=}`
+ elif [[ $para == --profiling* ]];then
+ profiling=`echo ${para#*=}`
+ profiling_dump_path=${cur_path}/output/profiling
+ mkdir -p ${profiling_dump_path}
+ elif [[ $para == --data_path* ]];then
+ data_path=`echo ${para#*=}`
+ fi
+done
+
+#Check that data_path has been passed in; no modification needed
+if [[ $data_path == "" ]];then
+ echo "[Error] para \"data_path\" must be configured"
+ exit 1
+fi
+
+#Training start time; no modification needed
+start_time=$(date +%s)
+
+#Enter the training script directory; review and modify per model
+cd $cur_path/../
+
+
+sed -i "s|../datasets/few_shot_data|$data_path|g" data/mini_imagenet.py
+#sed -i "s|_C.TRAIN.EPOCHS = 300|_C.TRAIN.EPOCHS = 1|g" config.py
+#sed -i "s|EPOCHS = 75|EPOCHS = 1|g" config.py
+
+#cd pytorch-timm
+#yes|pip3 uninstall timm
+#python3 setup.py develop > $cur_path/../install.txt
+
+#mkdir -p checkpoints
+#mkdir -p /root/.cache/torch/hub/checkpoints
+#cp $data_path/fcn_* /root/.cache/torch/hub/checkpoints
+
+for((RANK_ID=$RANK_ID_START;RANK_ID<$((RANK_SIZE+RANK_ID_START));RANK_ID++));
+do
+ #Set environment variables; no modification needed
+ echo "Device ID: $ASCEND_DEVICE_ID"
+ export RANK_ID=$RANK_ID
+
+
+
+ #Create the DeviceID output directory; no modification needed
+ if [ -d ${cur_path}/output/${ASCEND_DEVICE_ID} ];then
+ rm -rf ${cur_path}/output/${ASCEND_DEVICE_ID}
+ mkdir -p ${cur_path}/output/$ASCEND_DEVICE_ID/ckpt
+ else
+ mkdir -p ${cur_path}/output/$ASCEND_DEVICE_ID/ckpt
+ fi
+
+ #CPU core binding; remove for models that do not need it, adjust for those that do
+ #cpucount=`lscpu | grep "CPU(s):" | head -n 1 | awk '{print $2}'`
+ #cpustep=`expr $cpucount / 8`
+ #echo "taskset c steps:" $cpustep
+ #let a=RANK_ID*$cpustep
+ #let b=RANK_ID+1
+ #let c=b*$cpustep-1
+
+ #Run the training script; the arguments below need no modification, others require model review
+ nohup python3 main.py --phase pretrain --gpu 0 --save-path "./experiments/meta_part_resnet12_mini" \
+--head CosineNet --network ResNet --pre_head LinearNet --dataset miniImageNet --num-epoch $train_epochs > ${cur_path}/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log 2>&1 &
+done
+wait
+
+#Restore the modified parameters
+sed -i "s|$data_path|../datasets/few_shot_data|g" data/mini_imagenet.py
+#sed -i "s|epochs = 1|epochs = 20|g" examples/cats_and_dogs.py
+
+
+#conda deactivate
+#Training end time; no modification needed
+end_time=$(date +%s)
+e2e_time=$(( $end_time - $start_time ))
+
+#Print results; no modification needed
+echo "------------------ Final result ------------------"
+#Output performance (FPS); review and modify per model
+time=`grep "time/step" $cur_path/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log|awk -F ":" '{print $6}'|tail -n +3|awk '{sum+=$1} END {print"",sum/NR}'|sed s/[[:space:]]//g`
+FPS=`awk 'BEGIN{printf "%.2f\n",'${batch_size}'/'${time}'}'`
+
+
+#Print; no modification needed
+echo "Final Performance images/sec : $FPS"
+
+#Output training accuracy; review and modify per model
+#train_accuracy=`grep eval_accuracy $cur_path/output/${ASCEND_DEVICE_ID}/train_${ASCEND_DEVICE_ID}.log|grep -v mlp_log|awk 'END {print $5}'| sed 's/,//g' |cut -c 1-5`
+#Print; no modification needed
+#echo "Final Train Accuracy : ${train_accuracy}"
+#echo "E2E Training Duration sec : $e2e_time"
+
+#Summary of results for stability and accuracy monitoring
+#Training case information; no modification needed
+BatchSize=${batch_size}
+DeviceType=`uname -m`
+CaseName=${Network}_bs${BatchSize}_${RANK_SIZE}'p'_'perf'
+
+##Collect performance data
+#Throughput; no modification needed
+ActualFPS=${FPS}
+#Training time per iteration; no modification needed
+TrainingTime=`awk 'BEGIN{printf "%.2f\n",'${BatchSize}'*1000/'${FPS}'}'`
+
+#Extract Loss from train_$ASCEND_DEVICE_ID.log into train_${CaseName}_loss.txt; review per model
+grep "time/step" $cur_path/output/$ASCEND_DEVICE_ID/train_$ASCEND_DEVICE_ID.log|awk -F "Loss:" '{print $2}'|awk '{print $1}' >> $cur_path/output/$ASCEND_DEVICE_ID/train_${CaseName}_loss.txt
+
+#Loss value of the last iteration; no modification needed
+ActualLoss=`awk 'END {print}' $cur_path/output/$ASCEND_DEVICE_ID/train_${CaseName}_loss.txt`
+
+#Print key information into ${CaseName}.log; no modification needed
+echo "Network = ${Network}" > $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "RankSize = ${RANK_SIZE}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "BatchSize = ${BatchSize}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "DeviceType = ${DeviceType}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "CaseName = ${CaseName}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "ActualFPS = ${ActualFPS}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "TrainingTime = ${TrainingTime}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+#echo "TrainAccuracy = ${train_accuracy}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "ActualLoss = ${ActualLoss}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
+echo "E2ETrainingTime = ${e2e_time}" >> $cur_path/output/$ASCEND_DEVICE_ID/${CaseName}.log
diff --git a/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/utils.py b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..8faf836b1c5628539cf8e90e64bcafcd83f56534
--- /dev/null
+++ b/PyTorch/dev/cv/image_classification/Prototype-Completion_ID2464_for_PyTorch/utils.py
@@ -0,0 +1,137 @@
+#
+# BSD 3-Clause License
+#
+# Copyright (c) 2017 xxxx
+# All rights reserved.
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+# ============================================================================
+#
+import os
+import time
+import pprint
+import torch
+import numpy as np
+import scipy.sparse as sp
+import torch.npu
+NPU_CALCULATE_DEVICE = 0
+if os.getenv('NPU_CALCULATE_DEVICE') and str.isdigit(os.getenv('NPU_CALCULATE_DEVICE')):
+ NPU_CALCULATE_DEVICE = int(os.getenv('NPU_CALCULATE_DEVICE'))
+if torch.npu.current_device() != NPU_CALCULATE_DEVICE:
+ torch.npu.set_device(f'npu:{NPU_CALCULATE_DEVICE}')
+
+def set_gpu(x):
+ os.environ['CUDA_VISIBLE_DEVICES'] = x
+ print('using gpu:', x)
+
+def check_dir(path):
+ '''
+ Create directory if it does not exist.
+ path: Path of directory.
+ '''
+ if not os.path.exists(path):
+ os.mkdir(path)
+
+def count_accuracy(logits, label):
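+ # Top-1 accuracy in percent: share of argmax(logits) predictions equal to the labels.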
+ pred = torch.argmax(logits, dim=1).view(-1)
+ label = label.view(-1)
+ accuracy = 100 * pred.eq(label).float().mean()
+ return accuracy
+
+class Timer():
+ def __init__(self):
+ self.o = time.time()
+
+ def measure(self, p=1):
+ x = (time.time() - self.o) / float(p)
+ x = int(x)
+ if x >= 3600:
+ return '{:.1f}h'.format(x / 3600)
+ if x >= 60:
+ return '{}m'.format(round(x / 60))
+ return '{}s'.format(x)
+
+def log(log_file_path, string):
+ '''
+ Write one line of log into screen and file.
+ log_file_path: Path of log file.
+ string: String to write in log file.
+ '''
+ with open(log_file_path, 'a+') as f:
+ f.write(string + '\n')
+ f.flush()
+ print(string)
+
+def pick_vectors(dic, wnids, is_tensor=False):
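+ # Collect the vector for each wnid from dic; missing wnids get a zero vector of the
+ # same dimensionality. Returns a (len(wnids), dim) float tensor.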
+ o = next(iter(dic.values()))
+ dim = len(o)
+ ret = []
+ for wnid in wnids:
+ v = dic.get(wnid)
+ if v is None:
+ if not is_tensor:
+ v = [0] * dim
+ else:
+ v = torch.zeros(dim)
+ ret.append(v)
+ if not is_tensor:
+ return torch.FloatTensor(ret)
+ else:
+ return torch.stack(ret)
+
+
+def l2_loss(a, b):
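+ # Squared-error loss: sum of squared differences divided by twice the batch size.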
+ return ((a - b)**2).sum() / (len(a) * 2)
+
+
+def normt_spm(mx, method='in'):
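+ # Normalize a scipy sparse adjacency matrix.
+ # 'in': transpose and row-normalize (D^-1 * A^T), so incoming edge weights sum to 1.
+ # 'sym': symmetric normalization D^-1/2 * A^T * D^-1/2.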
+ if method == 'in':
+ mx = mx.transpose()
+ rowsum = np.array(mx.sum(1))
+ r_inv = np.power(rowsum, -1).flatten()
+ r_inv[np.isinf(r_inv)] = 0.
+ r_mat_inv = sp.diags(r_inv)
+ mx = r_mat_inv.dot(mx)
+ return mx
+
+ if method == 'sym':
+ rowsum = np.array(mx.sum(1))
+ r_inv = np.power(rowsum, -0.5).flatten()
+ r_inv[np.isinf(r_inv)] = 0.
+ r_mat_inv = sp.diags(r_inv)
+ mx = mx.dot(r_mat_inv).transpose().dot(r_mat_inv)
+ return mx
+
+
+def spm_to_tensor(sparse_mx):
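+ # Convert a scipy sparse matrix to a torch sparse FloatTensor (COO indices and values).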
+ sparse_mx = sparse_mx.tocoo().astype(np.float32)
+ indices = torch.from_numpy(np.vstack(
+ (sparse_mx.row, sparse_mx.col))).long()
+ values = torch.from_numpy(sparse_mx.data)
+ shape = torch.Size(sparse_mx.shape)
+ return torch.sparse.FloatTensor(indices, values, shape)
\ No newline at end of file