From 18a3264a11e1211b701ecfa2510123e72d6bcaf1 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 02:40:02 +0000
Subject: [PATCH 01/16] =?UTF-8?q?=E6=96=B0=E5=BB=BA=20club?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
TensorFlow/contrib/cv/club/.keep | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 TensorFlow/contrib/cv/club/.keep
diff --git a/TensorFlow/contrib/cv/club/.keep b/TensorFlow/contrib/cv/club/.keep
new file mode 100644
index 000000000..e69de29bb
--
Gitee
From d0fe68b8e05f20cc72897a5bbdeedf3518bee50a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 02:43:07 +0000
Subject: [PATCH 02/16] =?UTF-8?q?=E6=96=B0=E5=BB=BA=20CLUB=5Ftf=5Fwubo9826?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/.keep | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/.keep
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/.keep b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/.keep
new file mode 100644
index 000000000..e69de29bb
--
Gitee
From b20c3d414e43cc873de6b385a6474a5446b49cdf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:37:57 +0000
Subject: [PATCH 03/16] add TensorFlow/contrib/cv/club/CLUB_tf_wubo9826.
---
TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA
new file mode 100644
index 000000000..e69de29bb
--
Gitee
From 427a92188623817f45d19f63ea43fbe59112c6a3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:38:34 +0000
Subject: [PATCH 04/16] readme
---
.../cv/club/CLUB_tf_wubo9826/Readme.md | 127 ++++++++++++++++++
1 file changed, 127 insertions(+)
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
new file mode 100644
index 000000000..9c4fb7ff1
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
@@ -0,0 +1,127 @@
+- [Basic Information](#basic-information)
+- [Overview](#overview)
+- [Requirements](#requirements)
+- [Dataset](#dataset)
+- [Code and Path Description](#code-and-path-description)
+- [Running the code](#running-the-code)
+  - [Run command](#run-command)
+  - [Training log](#training-log)
+
+
+## Basic Information
+
+**Publisher: Huawei**
+
+**Application Domain: CV**
+
+**Version: 1.1**
+
+**Modified: 2022.7.20**
+
+**Framework: TensorFlow 1.15.0**
+
+**Model Format: ckpt**
+
+**Precision: Mixed**
+
+**Processor: Ascend 910**
+
+**Categories: Research**
+
+**Description: training code for an image-classification network based on the Contrastive Log-ratio Upper Bound (CLUB) of mutual information, implemented in TensorFlow**
+
+## Overview
+
+Through mutual-information minimization in domain adaptation experiments, this work validates the Contrastive Log-ratio Upper Bound (CLUB) of mutual information and establishes a connection between mutual information and a broad range of machine-learning training strategies.
+
+- Reference:
+
+ https://github.com/Linear95/CLUB
+
+ http://proceedings.mlr.press/v119/cheng20b/cheng20b.pdf
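The CLUB objective above upper-bounds I(x; y) with a variational Gaussian q(y|x) = N(mu(x), diag(exp(logvar(x)))): the bound is the mean log-likelihood of paired samples minus the mean log-likelihood over all unpaired combinations. The following is an illustrative NumPy sketch of the estimator, not the repository's TF graph; `mu` and `logvar` stand in for the outputs of the variational network that `MNISTModel_DANN.py` trains.

```python
import numpy as np

def club_upper_bound(mu, logvar, y):
    """Sample-based CLUB upper bound on I(x; y).

    mu, logvar: (batch, dim) outputs of q(y|x) = N(mu, exp(logvar)) on a batch of x.
    y: (batch, dim) samples paired with x. Illustrative sketch only.
    """
    # Positive term: log q(y_i | x_i) for matched pairs (up to additive constants).
    positive = -0.5 * (mu - y) ** 2 / np.exp(logvar)
    # Negative term: log q(y_j | x_i) averaged over all j (approximates the marginal).
    diff = y[None, :, :] - mu[:, None, :]              # (batch, batch, dim)
    negative = -0.5 * np.mean(diff ** 2, axis=1) / np.exp(logvar)
    return np.mean(positive.sum(-1) - negative.sum(-1))

# With a perfect predictor (mu == y), only the spread of y contributes to the bound.
mu = np.array([[0.0], [1.0]])
print(club_upper_bound(mu, np.zeros_like(mu), mu.copy()))
```

The same positive/negative structure appears in the `club` and `club_sample` methods of the model code added later in this patch series.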
+
+## Requirements
+
+- python 3.7
+- tensorflow 1.15.0
+- numpy
+- scikit-learn
+- opencv-python
+- scipy
+
+- Ascend: 1*Ascend 910 CPU: 24vCPUs 96GB
+
+```
+Image: ascend-share/5.1.rc1.alpha005_tensorflow-ascend910-cp37-euleros2.8-aarch64-training:1.15.0-21.0.2_0401
+```
+
+## Dataset
+
+**SVHN dataset**
+
+SVHN (Street View House Numbers) is a dataset collected from house numbers in Google Street View images; each image contains a group of digits '0'-'9'. Download:
+
+```
+Link: https://pan.baidu.com/s/1gvfAMQ-PAj-QXAz6q3g93Q
+Access code: j5lc
+```
+
+**MNIST dataset**
+
+MNIST is a dataset of handwritten-digit images compiled under the U.S. National Institute of Standards and Technology (NIST). It contains handwritten digits from 250 different people, half of them high-school students and half staff of the Census Bureau, and was assembled to support algorithmic recognition of handwritten digits. Download:
+
+```
+Link: https://pan.baidu.com/s/1-2rTGOUQo9O_aLDPGoyn5Q
+Access code: 7sxl
+```
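Since SVHN images have 3 channels while MNIST is grayscale, the repository's `imageloader.py` replicates the MNIST channel three times so both domains share one input shape. A standalone sketch of that conversion:

```python
import numpy as np

def gray_to_rgb(batch):
    """Replicate a grayscale batch (N, H, W, 1) into 3 channels (N, H, W, 3)."""
    return np.concatenate([batch, batch, batch], axis=3)

mnist_like = (np.random.rand(4, 28, 28, 1) * 255).astype(np.uint8)
print(gray_to_rgb(mnist_like).shape)  # (4, 28, 28, 3)
```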
+
+## Code and Path Description
+
+```
+TF-CLUB
+└─
+ ├─README.md
+  ├─MI_DA                  model code
+  ├─imageloader.py         dataset-loading script
+  ├─main_DANN.py           model training script
+  ├─MNISTModel_DANN.py     model definition script
+  ├─utils.py               utility script
+```
+
+## Running the code
+
+### Run command
+
+#### GPU
+
+```
+python main_DANN.py --data_path /path/to/data_folder/ --save_path /path/to/save_dir/ --source svhn --target mnist
+```
+
+#### ModelArts
+
+```
+Framework: tensorflow1.15
+NPU: 1*Ascend 910 CPU: 24vCPUs 96GB
+Image: ascend-share/5.1.rc1.alpha005_tensorflow-ascend910-cp37-euleros2.8-aarch64-training:1.15.0-21.0.2_0401
+OBS Path: /cann-id1254/
+Data Path in OBS: /cann-id1254/dataset/
+Debugger: unchecked
+```
+
+### Training log
+
+#### Accuracy results
+
+- GPU (V100) result
+
+  Source (svhn)
+  Target (mnist)
+  p_acc: 0.9688
+
+- NPU result
+
+  Source (svhn)
+  Target (mnist)
+  p_acc: 0.9688
+
--
Gitee
From 495b2d11bcac3f9a53e421a30636e81adcdd3389 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:38:57 +0000
Subject: [PATCH 05/16] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20Te?=
=?UTF-8?q?nsorFlow/contrib/cv/club/CLUB=5Ftf=5Fwubo9826/MI=5FDA?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA | 0
1 file changed, 0 insertions(+), 0 deletions(-)
delete mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA
deleted file mode 100644
index e69de29bb..000000000
--
Gitee
From 469e34d92bcb8b75bdfee86aeaf6dcc3a2859aca Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:39:14 +0000
Subject: [PATCH 06/16] code
---
.../club/CLUB_tf_wubo9826/MNISTModel_DANN.py | 469 ++++++++++++++++++
.../cv/club/CLUB_tf_wubo9826/imageloader.py | 431 ++++++++++++++++
.../cv/club/CLUB_tf_wubo9826/main_DANN.py | 383 ++++++++++++++
.../contrib/cv/club/CLUB_tf_wubo9826/utils.py | 364 ++++++++++++++
4 files changed, 1647 insertions(+)
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MNISTModel_DANN.py
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/imageloader.py
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/main_DANN.py
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/utils.py
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MNISTModel_DANN.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MNISTModel_DANN.py
new file mode 100644
index 000000000..41ffd3c07
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MNISTModel_DANN.py
@@ -0,0 +1,469 @@
+from npu_bridge.npu_init import *
+import tensorflow as tf
+import utils
+import math
+import tensorflow.contrib.layers as layers
+#import keras.backend as K
+
+
+def leaky_relu(x, a=0.1):
+ return tf.maximum(x, a * x)
+
+def noise(x, phase=True, std=1.0):
+ eps = tf.random_normal(tf.shape(x), 0.0, std)
+ output = tf.where(phase, x + eps, x)
+ return output
+
+class MNISTModel_DANN(object):
+ """Simple MNIST domain adaptation model."""
+ def __init__(self, options):
+ self.reg_disc = options['reg_disc']
+ self.reg_con = options['reg_con']
+ self.reg_tgt = options['reg_tgt']
+ self.lr_g = options['lr_g']
+ self.lr_d = options['lr_d']
+ self.sample_type = tf.float32
+ self.num_labels = options['num_labels']
+ self.num_domains = options['num_domains']
+ self.num_targets = options['num_targets']
+ self.sample_shape = options['sample_shape']
+ self.ef_dim = options['ef_dim']
+ self.latent_dim = options['latent_dim']
+ self.batch_size = options['batch_size']
+ self.initializer = tf.contrib.layers.xavier_initializer()
+ # self.initializer = tf.truncated_normal_initializer(stddev=0.1)
+ self.X = tf.placeholder(tf.as_dtype(self.sample_type), [None] + list(self.sample_shape), name="input_X")
+ #print(self.X)
+ #self.X = tf.placeholder(tf.as_dtype(self.sample_type), [None]+[28,28,3], name="input_X")
+ #print(self.X)
+ self.y = tf.placeholder(tf.float32, [None, self.num_labels], name="input_labels")
+ #print(self.y)
+ #self.y = tf.placeholder(tf.float32, [self.batch_size,self.num_labels], name="input_labels")
+ self.domains = tf.placeholder(tf.float32, [None, self.num_domains], name="input_domains")
+ self.train = tf.placeholder(tf.bool, [], name = 'train')
+ self._build_model()
+ self._setup_train_ops()
+
+ # def feature_extractor(self, reuse = False):
+ # input_X = utils.normalize_images(self.X)
+ # with tf.variable_scope('feature_extractor_conv1',reuse = reuse):
+ # h_conv1 = layers.conv2d(input_X, self.ef_dim, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_pool1 = layers.max_pool2d(h_conv1, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_conv2',reuse = reuse):
+ # h_conv2 = layers.conv2d(h_pool1, self.ef_dim * 2, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_pool2 = layers.max_pool2d(h_conv2, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_conv3',reuse = reuse):
+ # h_conv3 = layers.conv2d(h_pool2, self.ef_dim * 4, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_pool3 = layers.max_pool2d(h_conv3, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_fc1'):
+ # fc_input = layers.flatten(h_pool3)
+ # fc_1 = layers.fully_connected(inputs=fc_input, num_outputs=self.latent_dim,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+
+ # self.features = fc_1
+ # feature_shape = self.features.get_shape()
+ # self.feature_dim = feature_shape[1].value
+
+ # self.features_src = tf.slice(self.features, [0, 0], [self.batch_size, -1])
+ # self.features_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features, [0, 0], [self.batch_size, -1]), lambda: self.features)
+
+ # def feature_extractor(self, reuse = False):
+ # input_X = utils.normalize_images(self.X)
+ # with tf.variable_scope('feature_extractor_conv1',reuse = reuse):
+ # h_conv1 = layers.conv2d(input_X, self.ef_dim, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv1 = layers.conv2d(h_conv1, self.ef_dim, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv1 = layers.max_pool2d(h_conv1, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_conv2',reuse = reuse):
+ # h_conv2 = layers.conv2d(h_conv1, self.ef_dim * 2, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv2 = layers.conv2d(h_conv2, self.ef_dim * 2, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv2 = layers.max_pool2d(h_conv2, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_conv3',reuse = reuse):
+ # h_conv3 = layers.conv2d(h_conv2, self.ef_dim * 4, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv3 = layers.conv2d(h_conv3, self.ef_dim * 4, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv3 = layers.max_pool2d(h_conv3, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_fc1'):
+ # # fc_input = tf.nn.dropout(layers.flatten(h_conv3), keep_prob = 0.9)
+ # fc_input = layers.flatten(h_conv3)
+ # fc_1 = layers.fully_connected(inputs=fc_input, num_outputs=self.latent_dim,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+
+ # self.features = fc_1
+ # feature_shape = self.features.get_shape()
+ # self.feature_dim = feature_shape[1].value
+ # self.features_src = tf.slice(self.features, [0, 0], [self.batch_size, -1])
+ # self.features_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features, [0, 0], [self.batch_size, -1]), lambda: self.features)
+
+
+ def feature_extractor_c(self, reuse = False):
+ training = tf.cond(self.train, lambda: True, lambda: False)
+ X = layers.instance_norm(self.X)
+ with tf.variable_scope('feature_extractor_c', reuse = reuse):
+ h_conv1 = layers.conv2d(self.X, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.max_pool2d(h_conv1, 2, 2, padding='SAME')
+ h_conv1 = noise(tf.layers.dropout(h_conv1, rate=0.5, training=training), phase=training)
+
+
+
+ h_conv2 = layers.conv2d(h_conv1, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.max_pool2d(h_conv2, 2, 2, padding='SAME')
+ h_conv2 = noise(tf.layers.dropout(h_conv2, rate=0.5, training=training), phase=training)
+
+
+
+ h_conv3 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = tf.reduce_mean(h_conv3, axis=[1, 2])
+
+ self.features_c = h_conv3
+ feature_shape = self.features_c.get_shape()
+ self.feature_c_dim = feature_shape[1].value
+ self.features_c_src = tf.slice(self.features_c, [0, 0], [self.batch_size, -1])
+ self.features_c_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features_c, [0, 0], [self.batch_size, -1]), lambda: self.features_c)
+
+ def feature_extractor_d(self, reuse = False):
+ training = tf.cond(self.train, lambda: True, lambda: False)
+ X = layers.instance_norm(self.X)
+ with tf.variable_scope('feature_extractor_d', reuse = reuse):
+ h_conv1 = layers.conv2d(self.X, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.max_pool2d(h_conv1, 2, 2, padding='SAME')
+ h_conv1 = noise(tf.layers.dropout(h_conv1, rate=0.5, training=training), phase=training)
+
+
+
+ h_conv2 = layers.conv2d(h_conv1, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.max_pool2d(h_conv2, 2, 2, padding='SAME')
+ h_conv2 = noise(tf.layers.dropout(h_conv2, rate=0.5, training=training), phase=training)
+
+
+
+ h_conv3 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = tf.reduce_mean(h_conv3, axis=[1, 2])
+
+ self.features_d = h_conv3
+ # self.features_d_src = tf.slice(self.features_d, [0, 0], [self.batch_size, -1])
+ # self.features_d_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features_d, [0, 0], [self.batch_size, -1]), lambda: self.features_d)
+
+ def mi_net(self, input_sample, reuse = False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ fc_1 = layers.fully_connected(inputs=input_sample, num_outputs=64, activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ fc_2 = layers.fully_connected(inputs=fc_1, num_outputs=1, activation_fn=None, weights_initializer=self.initializer)
+ return fc_2
+
+
+ def mine(self):
+ # tmp_1 = tf.random_shuffle(tf.range(self.batch_size))
+ # tmp_2 = tf.random_shuffle(tf.range(self.batch_size))
+ # shuffle_d_1 = tf.gather(tf.slice(tf.identity(self.features_d), [0, 0], [self.batch_size, -1]), tmp_1)
+ # shuffle_d_2 = tf.gather(tf.slice(tf.identity(self.features_d), [self.batch_size, 0], [self.batch_size, -1]), tmp_2)
+ # self.shuffle_d = tf.concat([shuffle_d_1, shuffle_d_2], axis = 0)
+ tmp = tf.random_shuffle(tf.range(self.batch_size*2))
+ self.shuffle_d = tf.gather(self.features_d, tmp)
+
+ input_0 = tf.concat([self.features_c,self.features_d], axis = -1)
+ input_1 = tf.concat([self.features_c,self.shuffle_d], axis = -1)
+
+ T_0 = self.mi_net(input_0)
+ T_1 = self.mi_net(input_1, reuse=True)
+
+ E_pos = math.log(2.) - tf.nn.softplus(-T_0)
+ E_neg = tf.nn.softplus(-T_1) + T_1 - math.log(2.)
+
+ # grad = tf.gradients(mi_l, [self.features_c, self.features_d, self.shuffle_d])
+ # pdb.set_trace()
+ # self.penalty = tf.reduce_mean(tf.square(tf.reduce_sum(tf.square(grad))-1.))
+ self.bound = tf.reduce_mean(E_pos - E_neg)
+
+
+ def club(self, reuse=False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
+
+ p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
+
+ mu = prediction
+ logvar = prediction_1
+
+ prediction_tile = tf.tile(tf.expand_dims(prediction, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+ features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
+
+ positive = -(mu - self.features_d)**2/2./tf.exp(logvar)
+ negative = -tf.reduce_mean((features_d_tile-prediction_tile)**2, 1)/2./tf.exp(logvar)
+
+ # positive = -(prediction-self.features_d)**2
+ # negative = -tf.reduce_mean((features_d_tile-prediction_tile)**2, 1)
+
+ self.lld = tf.reduce_mean(tf.reduce_sum(positive, -1))
+ self.bound = tf.reduce_mean(tf.reduce_sum(positive, -1)-tf.reduce_sum(negative, -1))
+
+ def club_sample(self, reuse=False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
+
+ p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
+
+ mu = prediction
+ logvar = prediction_1
+
+ tmp = tf.random_shuffle(tf.range(self.batch_size*2))
+ self.shuffle_d = tf.gather(self.features_d, tmp)
+
+ positive = -(mu - self.features_d)**2/2./tf.exp(logvar)
+ negative = -(mu - self.shuffle_d)**2/2./tf.exp(logvar)
+
+ self.lld = tf.reduce_mean(tf.reduce_sum(positive, -1))
+ self.bound = tf.reduce_mean(tf.reduce_sum(positive, -1)-tf.reduce_sum(negative, -1))
+
+ def NWJ(self, reuse=False):
+ features_c_tile = tf.tile(tf.expand_dims(self.features_c, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
+ features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+ input_0 = tf.concat([self.features_c, self.features_d], axis = -1)
+ input_1 = tf.concat([features_c_tile, features_d_tile], axis = -1)
+
+ T_0 = self.mi_net(input_0)
+ T_1 = self.mi_net(input_1, reuse=True) - 1.
+
+ self.bound = tf.reduce_mean(T_0) - tf.reduce_mean(tf.exp(tf.reduce_logsumexp(T_1, 1) - math.log(self.batch_size*2)))
+
+ def VUB(self, reuse=False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
+
+ p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
+
+ mu = prediction
+ logvar = prediction_1
+
+ self.lld = tf.reduce_mean(tf.reduce_sum(-(mu-self.features_d)**2 / tf.exp(logvar) - logvar, -1))
+ self.bound = 1. / 2. * tf.reduce_mean(mu**2 + tf.exp(logvar) - 1. - logvar)
+
+ def L1OutUB(self, reuse=False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
+
+ p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
+
+ mu = prediction
+ logvar = prediction_1
+
+ positive = tf.reduce_sum(-(mu - self.features_d)**2/2./tf.exp(logvar) - logvar/2., -1)
+
+ prediction_tile = tf.tile(tf.expand_dims(prediction, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+ prediction_1_tile = tf.tile(tf.expand_dims(prediction_1, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+ features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
+
+ all_probs = tf.reduce_sum(-(features_d_tile-prediction_tile)**2/2./tf.exp(prediction_1_tile) - prediction_1_tile/2., -1)
+ diag_mask = tf.diag([-20.]*self.batch_size*2)
+
+ negative = tf.reduce_logsumexp(all_probs + diag_mask, 0) - math.log(self.batch_size*2 - 1.)
+ self.bound = tf.reduce_mean(positive-negative)
+ self.lld = tf.reduce_mean(tf.reduce_sum(-(mu - self.features_d)**2/tf.exp(logvar) - logvar, -1))
+
+
+ def nce(self):
+
+ features_c_tile = tf.tile(tf.expand_dims(self.features_c, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
+ features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+ input_0 = tf.concat([self.features_c, self.features_d], axis = -1)
+ input_1 = tf.concat([features_c_tile, features_d_tile], axis = -1)
+
+ T_0 = self.mi_net(input_0)
+ T_1 = tf.reduce_mean(self.mi_net(input_1, reuse=True), axis=1)
+
+ E_pos = math.log(2.) - tf.nn.softplus(-T_0)
+ E_neg = tf.nn.softplus(-T_1) + T_1 - math.log(2.)
+
+ self.bound = tf.reduce_mean(E_pos - E_neg)
+
+
+ def label_predictor(self):
+ # with tf.variable_scope('label_predictor_fc1'):
+ # fc_1 = layers.fully_connected(inputs=self.features_for_prediction, num_outputs=self.latent_dim,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ with tf.variable_scope('label_predictor_logits'):
+ logits = layers.fully_connected(inputs=self.features_c_for_prediction, num_outputs=self.num_labels,
+ activation_fn=None, weights_initializer=self.initializer)
+
+ self.y_pred = tf.nn.softmax(logits)
+ self.y_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = self.y))
+ self.y_acc = utils.predictor_accuracy(self.y_pred, self.y)
+
+
+ def domain_predictor(self, reuse = False):
+ with tf.variable_scope('domain_predictor_fc1', reuse = reuse):
+ fc_1 = layers.fully_connected(inputs=self.features_d, num_outputs=self.latent_dim,
+ activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ with tf.variable_scope('domain_predictor_logits', reuse = reuse):
+ self.d_logits = layers.fully_connected(inputs=fc_1, num_outputs=self.num_domains,
+ activation_fn=None, weights_initializer=self.initializer)
+
+
+ logits_real = tf.slice(self.d_logits, [0, 0], [self.batch_size, -1])
+ logits_fake = tf.slice(self.d_logits, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
+
+ label_real = tf.slice(self.domains, [0, 0], [self.batch_size, -1])
+ label_fake = tf.slice(self.domains, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
+ label_pseudo = tf.ones(label_fake.shape) - label_fake
+
+ self.d_pred = tf.nn.sigmoid(self.d_logits)
+ real_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_real, labels = label_real))
+ fake_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_fake, labels = label_fake))
+ self.d_loss = real_d_loss + self.reg_tgt * fake_d_loss
+ self.d_acc = utils.predictor_accuracy(self.d_pred, self.domains)
+
+ # def domain_test(self, reuse=False):
+ # with tf.variable_scope('domain_test_fc1', reuse = reuse):
+ # fc_1 = layers.fully_connected(inputs=self.features, num_outputs=self.latent_dim,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # with tf.variable_scope('domain_test_logits', reuse = reuse):
+ # d_logits = layers.fully_connected(inputs=fc_1, num_outputs=self.num_domains,
+ # activation_fn=None, weights_initializer=self.initializer)
+
+ # logits_real = tf.slice(d_logits, [0, 0], [self.batch_size, -1])
+ # logits_fake = tf.slice(d_logits, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
+
+ # self.test_pq = tf.nn.softmax(d_logits)
+
+ # label_real = tf.slice(self.domains, [0, 0], [self.batch_size, -1])
+ # label_fake = tf.slice(self.domains, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
+
+ # real_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_real, labels = label_real))
+ # fake_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_fake, labels = label_fake))
+ # self.d_test = real_d_loss + fake_d_loss
+
+ # def distance(self, a, b):
+ # a_matrix = tf.tile(tf.expand_dims(a, 0), [a.shape[0], 1, 1])
+ # b_matrix = tf.tile(tf.expand_dims(b, 0), [b.shape[0], 1, 1])
+ # b_matrix = tf.transpose(b_matrix, [1,0,2])
+ # distance = K.sqrt(K.maximum(K.sum(K.square(a_matrix - b_matrix), axis=2), K.epsilon()))
+ # return distance
+
+ # def calculate_mask(self, idx, idx_2):
+ # idx_matrix = tf.tile(tf.expand_dims(idx, 0), [idx.shape[0], 1])
+ # idx_2_matrix = tf.tile(tf.expand_dims(idx_2, 0), [idx_2.shape[0], 1])
+ # idx_2_transpose = tf.transpose(idx_2_matrix, [1,0])
+ # mask = tf.cast(tf.equal(idx_matrix, idx_2_transpose), tf.float32)
+ # return mask
+
+ # def contrastive_loss(self, y_true, y_pred, hinge=1.0):
+ # margin = hinge
+ # sqaure_pred = K.square(y_pred)
+ # margin_square = K.square(K.maximum(margin - y_pred, 0))
+ # return K.mean(y_true * sqaure_pred + (1 - y_true) * margin_square)
+
+ def _build_model(self):
+ self.feature_extractor_c()
+ self.feature_extractor_d()
+ self.label_predictor()
+ self.domain_predictor()
+ self.club()
+ # self.domain_test()
+
+ # self.src_pred = tf.argmax(tf.slice(self.y_pred, [0, 0], [self.batch_size, -1]), axis=-1)
+ # self.distance = self.distance(self.features_src, self.features_src)
+ # self.batch_compare = self.calculate_mask(self.src_pred, self.src_pred)
+ # self.con_loss = self.contrastive_loss(self.batch_compare, self.distance)
+
+ self.context_loss = self.y_loss + 0.1 * self.bound# + self.reg_con*self.con_loss
+ self.domain_loss = self.d_loss
+
+ def _setup_train_ops(self):
+ context_vars = utils.vars_from_scopes(['feature_extractor_c', 'label_predictor'])
+ domain_vars = utils.vars_from_scopes(['feature_extractor_d', 'domain_predictor'])
+ mi_vars = utils.vars_from_scopes(['mi_net'])
+ self.domain_test_vars = utils.vars_from_scopes(['domain_test'])
+
+        # Original version, without LossScale
+ self.train_context_ops = tf.train.AdamOptimizer(self.lr_g,0.5).minimize(self.context_loss, var_list = context_vars)
+ self.train_domain_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(self.domain_loss, var_list = domain_vars)
+ self.train_mi_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(-self.lld, var_list = mi_vars)
+ # self.test_domain_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(self.d_test, var_list = self.domain_test_vars)
+
+        # # With LossScale added
+ # trainContextOps = tf.train.AdamOptimizer(self.lr_g,0.5)
+ # trainDomainOps = tf.train.AdamOptimizer(self.lr_d,0.5)
+ # trainMiOps = tf.train.AdamOptimizer(self.lr_d,0.5)
+ #
+ # loss_scale_manager1 = ExponentialUpdateLossScaleManager(init_loss_scale=2**32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
+ # self.train_context_ops = NPULossScaleOptimizer(trainContextOps,loss_scale_manager1).minimize(self.context_loss, var_list = context_vars)
+ #
+ # loss_scale_manager2 = ExponentialUpdateLossScaleManager(init_loss_scale=2 ** 32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
+ # self.train_domain_ops = NPULossScaleOptimizer(trainDomainOps,loss_scale_manager2).minimize(self.domain_loss, var_list = domain_vars)
+ #
+ # loss_scale_manager3 = ExponentialUpdateLossScaleManager(init_loss_scale=2 ** 32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
+ # self.train_mi_ops = NPULossScaleOptimizer(trainMiOps,loss_scale_manager3).minimize(-self.lld, var_list = mi_vars)
+
+
+
+
+
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/imageloader.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/imageloader.py
new file mode 100644
index 000000000..0980af08d
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/imageloader.py
@@ -0,0 +1,431 @@
+from npu_bridge.npu_init import *
+import numpy as np
+import pickle
+from tensorflow.examples.tutorials.mnist import input_data
+import scipy.io
+from utils import to_one_hot
+import pdb
+import cv2
+import os
+#from scipy.misc import imsave
+
+def load_datasets(data_dir = './', sets={'mnist':1, 'svhn':1, 'mnistm':1, 'usps':1}):
+ datasets = {}
+ for key in sets.keys():
+ datasets[key] = {}
+ if sets.get('mnist'): # original code used sets.has_key('mnist'), which Python 3 does not support
+ mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
+ mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
+
+ # mnist_inv = mnist_train * (-1) + 255
+ # mnist_train = np.concatenate([mnist_train, mnist_inv])
+ mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
+ mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
+ # dataset['mnist']['train'] = {'images': mnist_train, 'labels': np.concatenate([mnist.train.labels, mnist.train.labels])}
+ datasets['mnist']['train'] = {'images': mnist_train, 'labels': mnist.train.labels}
+ datasets['mnist']['test'] = {'images': mnist_test, 'labels': mnist.test.labels}
+ datasets['mnist']['valid'] = {'images': mnist_valid, 'labels': mnist.validation.labels}
+
+
+ if sets.get('mnist32'):
+ mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
+ mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
+ mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
+ mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
+
+ mnist_train = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_train]
+ mnist_train = np.concatenate(mnist_train)
+ mnist_test = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_test]
+ mnist_test = np.concatenate(mnist_test)
+ mnist_valid = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_valid]
+ mnist_valid = np.concatenate(mnist_valid)
+
+ datasets['mnist32']['train'] = {'images': mnist_train, 'labels': mnist.train.labels}
+ datasets['mnist32']['test'] = {'images': mnist_test, 'labels': mnist.test.labels}
+ datasets['mnist32']['valid'] = {'images': mnist_valid, 'labels': mnist.validation.labels}
+
+ # if sets.has_key('svhn'):
+ # svhn = scipy.io.loadmat(data_dir + 'SVHN/svhn.mat')
+ # svhn_train = svhn['train'].astype(np.uint8)
+ # svhn_labtrain = svhn['labtrain'].astype(np.int32)
+ # svhn_valid = svhn['val'].astype(np.uint8)
+ # svhn_labval= svhn['labval'].astype(np.int32)
+ # svhn_test = svhn['test'].astype(np.uint8)
+ # svhn_labtest =svhn['labtest'].astype(np.int32)
+ # dataset['svhn']['train'] = {'images': svhn_train, 'labels': svhn_labtrain}
+ # dataset['svhn']['test'] = {'images': svhn_test, 'labels': svhn_labtest}
+ # dataset['svhn']['valid'] = {'images': svhn_valid, 'labels': svhn_labval}
+
+ if sets.get('svhn'):
+ svhn_train = scipy.io.loadmat(data_dir + 'SVHN/train_32x32.mat')
+ svhn_train_data = svhn_train['X'].transpose((3,0,1,2)).astype(np.uint8)
+
+ svhn_train_label = svhn_train['y'] + 1
+ svhn_train_label[svhn_train_label > 10] = 1
+ svhn_train_label = to_one_hot(svhn_train_label)
+
+ svhn_valid_data = svhn_train_data[-5000:]
+ svhn_train_data = svhn_train_data[:-5000]
+
+ svhn_valid_label = svhn_train_label[-5000:]
+ svhn_train_label = svhn_train_label[:-5000]
+
+ svhn_test = scipy.io.loadmat(data_dir + 'SVHN/test_32x32.mat')
+ svhn_test_data = svhn_test['X'].transpose((3,0,1,2)).astype(np.uint8)
+
+ svhn_test_label = svhn_test['y'] + 1
+ svhn_test_label[svhn_test_label > 10] = 1
+ svhn_test_label = to_one_hot(svhn_test_label)
+
+ # svhn_train_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_train_data]
+ # svhn_train_data = np.concatenate(svhn_train_data)
+ # svhn_test_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_test_data]
+ # svhn_test_data = np.concatenate(svhn_test_data)
+ # svhn_valid_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_valid_data]
+ # svhn_valid_data = np.concatenate(svhn_valid_data)
+
+ svhn_train_data = svhn_train_data[:,2:30,2:30,:]
+ svhn_test_data = svhn_test_data[:,2:30,2:30,:]
+ svhn_valid_data = svhn_valid_data[:,2:30,2:30,:]
+
+
+
+ datasets['svhn']['train'] = {'images': svhn_train_data, 'labels': svhn_train_label}
+ datasets['svhn']['test'] = {'images': svhn_test_data, 'labels': svhn_test_label}
+ datasets['svhn']['valid'] = {'images': svhn_valid_data, 'labels': svhn_valid_label}
+
+ if sets.get('mnistm'):
+ if 'mnist' not in locals():
+ mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
+ mnistm = pickle.load(open(data_dir + 'MNISTM/mnistm_data.pkl', 'rb'))
+ mnistm_train = mnistm['train']
+ mnistm_test = mnistm['test']
+ mnistm_valid = mnistm['valid']
+
+ datasets['mnistm']['train'] = {'images': mnistm_train, 'labels': mnist.train.labels}
+ datasets['mnistm']['test'] = {'images': mnistm_test, 'labels': mnist.test.labels}
+ datasets['mnistm']['valid'] = {'images': mnistm_valid, 'labels': mnist.validation.labels}
+ if sets.get('usps'):
+ usps_file = open(data_dir + 'USPS/usps_28x28.pkl', 'rb')
+ usps = pickle.load(usps_file)
+ n = 5104
+ usps_train = (usps[0][0][:n].reshape(-1,28,28,1)*255.).astype('uint8')
+ usps_train = np.concatenate([usps_train, usps_train, usps_train], 3)
+ usps_valid = (usps[0][0][n:].reshape(-1,28,28,1)*255.).astype('uint8')
+ usps_valid = np.concatenate([usps_valid, usps_valid, usps_valid], 3)
+ usps_test = (usps[1][0].reshape(-1,28,28,1)*255.).astype('uint8')
+ usps_test = np.concatenate([usps_test, usps_test, usps_test], 3)
+ usps_images = (np.concatenate([usps[0][0], usps[1][0]]).reshape(-1, 28, 28, 1) * 255.).astype(np.uint8)
+ usps_images = np.concatenate([usps_images, usps_images, usps_images], 3)
+
+ datasets['usps']['train'] = {'images': usps_train, 'labels': to_one_hot(usps[0][1][:n])}
+ datasets['usps']['test'] = {'images': usps_test, 'labels': to_one_hot(usps[1][1])}
+ datasets['usps']['valid'] = {'images': usps_valid, 'labels': to_one_hot(usps[0][1][n:])}
+
+ if sets.get('cifar'):
+ batch_1 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_1.mat')
+ batch_2 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_2.mat')
+ batch_3 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_3.mat')
+ batch_4 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_4.mat')
+ batch_5 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_5.mat')
+ batch_test = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/test_batch.mat')
+
+ train_batch = np.concatenate([batch_1['data'], batch_2['data'], batch_3['data'],
+ batch_4['data'], batch_5['data']])
+ train_label = np.concatenate([batch_1['labels'], batch_2['labels'], batch_3['labels'],
+ batch_4['labels'], batch_5['labels']])
+
+ cifar_train_data = np.reshape(train_batch, [-1, 3, 32, 32]).transpose((0,2,3,1))
+ cifar_train_label = to_one_hot(np.squeeze(train_label))
+
+ cifar_train_data_reduce = cifar_train_data[cifar_train_label[:,6]==0]
+ cifar_train_label_tmp = cifar_train_label[cifar_train_label[:,6]==0]
+ cifar_train_label_reduce = np.concatenate([cifar_train_label_tmp[:,:6], cifar_train_label_tmp[:,7:]], axis=1)
+
+ # cifar_valid_data = cifar_train_data[-5000:]
+ # cifar_train_data = cifar_train_data[:-5000]
+
+ # cifar_valid_label = cifar_train_label[-5000:]
+ # cifar_train_label = cifar_train_label[:-5000]
+
+ cifar_valid_data_reduce = cifar_train_data_reduce[-5000:]
+ cifar_train_data_reduce = cifar_train_data_reduce[:-5000]
+
+ cifar_valid_label_reduce = cifar_train_label_reduce[-5000:]
+ cifar_train_label_reduce = cifar_train_label_reduce[:-5000]
+
+ cifar_test_data = np.reshape(batch_test['data'], [-1, 3, 32, 32]).transpose((0,2,3,1))
+ cifar_test_label = to_one_hot(np.squeeze(batch_test['labels']))
+
+ cifar_test_data_reduce = cifar_test_data[cifar_test_label[:,6]==0]
+ cifar_test_label_tmp = cifar_test_label[cifar_test_label[:,6]==0]
+ cifar_test_label_reduce = np.concatenate([cifar_test_label_tmp[:,:6], cifar_test_label_tmp[:,7:]], axis=1)
+
+ datasets['cifar']['train'] = {'images': cifar_train_data_reduce, 'labels': cifar_train_label_reduce}
+ datasets['cifar']['test'] = {'images': cifar_test_data_reduce, 'labels': cifar_test_label_reduce}
+ datasets['cifar']['valid'] = {'images': cifar_valid_data_reduce, 'labels': cifar_valid_label_reduce}
+
+ if sets.get('stl'):
+ stl_train = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/stl10_matlab/train.mat')
+ stl_train_data = np.reshape(stl_train['X'], [-1, 3, 96, 96]).transpose((0,3,2,1))
+
+ stl_train_label = np.squeeze(stl_train['y']-1)
+
+ stl_train_label_tmp = np.zeros([stl_train_label.shape[0], 10])
+
+ stl_train_label_tmp[stl_train_label==0,0]=1.
+ stl_train_label_tmp[stl_train_label==1,2]=1.
+ stl_train_label_tmp[stl_train_label==2,1]=1.
+ stl_train_label_tmp[stl_train_label==3,3]=1.
+ stl_train_label_tmp[stl_train_label==4,4]=1.
+ stl_train_label_tmp[stl_train_label==5,5]=1.
+ stl_train_label_tmp[stl_train_label==6,7]=1.
+ stl_train_label_tmp[stl_train_label==7,6]=1.
+ stl_train_label_tmp[stl_train_label==8,8]=1.
+ stl_train_label_tmp[stl_train_label==9,9]=1.
+
+
+ stl_train_data_reduce = stl_train_data[stl_train_label_tmp[:,6]==0]
+ stl_train_label_tmp = stl_train_label_tmp[stl_train_label_tmp[:,6]==0]
+ stl_train_label_reduce = np.concatenate([stl_train_label_tmp[:,:6], stl_train_label_tmp[:,7:]], axis=1)
+
+
+ stl_test = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/stl10_matlab/test.mat')
+ stl_test_data = np.reshape(stl_test['X'], [-1, 3, 96, 96]).transpose((0,3,2,1))
+
+ stl_test_label = np.squeeze(stl_test['y']-1)
+
+ stl_test_label_tmp = np.zeros([stl_test_label.shape[0], 10])
+
+ stl_test_label_tmp[stl_test_label==0,0]=1.
+ stl_test_label_tmp[stl_test_label==1,2]=1.
+ stl_test_label_tmp[stl_test_label==2,1]=1.
+ stl_test_label_tmp[stl_test_label==3,3]=1.
+ stl_test_label_tmp[stl_test_label==4,4]=1.
+ stl_test_label_tmp[stl_test_label==5,5]=1.
+ stl_test_label_tmp[stl_test_label==6,7]=1.
+ stl_test_label_tmp[stl_test_label==7,6]=1.
+ stl_test_label_tmp[stl_test_label==8,8]=1.
+ stl_test_label_tmp[stl_test_label==9,9]=1.
+
+ stl_test_data_reduce = stl_test_data[stl_test_label_tmp[:,6]==0]
+ stl_test_label_tmp = stl_test_label_tmp[stl_test_label_tmp[:,6]==0]
+ stl_test_label_reduce = np.concatenate([stl_test_label_tmp[:,:6], stl_test_label_tmp[:,7:]], axis=1)
+
+
+ stl_valid_data_reduce = stl_train_data_reduce[-500:]
+ stl_train_data_reduce = stl_train_data_reduce[:-500]
+
+ stl_valid_label_reduce = stl_train_label_reduce[-500:]
+ stl_train_label_reduce = stl_train_label_reduce[:-500]
+
+ stl_train_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_train_data_reduce]
+ stl_train_data_reduce = np.concatenate(stl_train_data_reduce)
+ stl_test_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_test_data_reduce]
+ stl_test_data_reduce = np.concatenate(stl_test_data_reduce)
+ stl_valid_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_valid_data_reduce]
+ stl_valid_data_reduce = np.concatenate(stl_valid_data_reduce)
+
+ datasets['stl']['train'] = {'images': stl_train_data_reduce, 'labels': stl_train_label_reduce}
+ datasets['stl']['test'] = {'images': stl_test_data_reduce, 'labels': stl_test_label_reduce}
+ datasets['stl']['valid'] = {'images': stl_valid_data_reduce, 'labels': stl_valid_label_reduce}
+
+ return datasets
+
+
+def save_dataset(datasets, save_path = './save_datasets/'):
+ if not os.path.exists(save_path):
+ os.makedirs(save_path)
+ for key in datasets.keys():
+ train = datasets[key]['train']['images']
+ valid = datasets[key]['valid']['images']
+ test = datasets[key]['test']['images']
+ labtrain = datasets[key]['train']['labels']
+ labval = datasets[key]['valid']['labels']
+ labtest = datasets[key]['test']['labels']
+ scipy.io.savemat(save_path + key + '.mat',{'train':train, 'val':valid,'test':test,'labtrain':labtrain,'labval':labval,'labtest':labtest})
+ return 0
+
+def sets_concatenate(datasets, sets):
+ N_train = 0
+ N_valid = 0
+ N_test = 0
+
+ for key in sets:
+ label_len = datasets[key]['train']['labels'].shape[1]
+ N_train = N_train + datasets[key]['train']['images'].shape[0]
+ N_valid = N_valid + datasets[key]['valid']['images'].shape[0]
+ N_test = N_test + datasets[key]['test']['images'].shape[0]
+ S = datasets[key]['train']['images'].shape[1]
+ train = {'images': np.zeros((N_train,S,S,3)).astype(np.float32),'labels':np.zeros((N_train,label_len)).astype('float32'),'domains':np.zeros((N_train,)).astype('float32')}
+ valid = {'images': np.zeros((N_valid,S,S,3)).astype(np.float32),'labels':np.zeros((N_valid,label_len)).astype('float32'),'domains':np.zeros((N_valid,)).astype('float32')}
+ test = {'images': np.zeros((N_test,S,S,3)).astype(np.float32),'labels':np.zeros((N_test,label_len)).astype('float32'),'domains':np.zeros((N_test,)).astype('float32')}
+ srt = 0
+ edn = 0
+ for key in sets:
+ domain = sets[key]
+ srt = edn
+ edn = srt + datasets[key]['train']['images'].shape[0]
+ train['images'][srt:edn,:,:,:] = datasets[key]['train']['images']
+ train['labels'][srt:edn,:] = datasets[key]['train']['labels']
+ train['domains'][srt:edn] = domain * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32')
+ srt = 0
+ edn = 0
+ for key in sets:
+ domain = sets[key]
+ srt = edn
+ edn = srt + datasets[key]['valid']['images'].shape[0]
+ valid['images'][srt:edn,:,:,:] = datasets[key]['valid']['images']
+ valid['labels'][srt:edn,:] = datasets[key]['valid']['labels']
+ valid['domains'][srt:edn] = domain * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32')
+ srt = 0
+ edn = 0
+ for key in sets:
+ domain = sets[key]
+ srt = edn
+ edn = srt + datasets[key]['test']['images'].shape[0]
+ test['images'][srt:edn,:,:,:] = datasets[key]['test']['images']
+ test['labels'][srt:edn,:] = datasets[key]['test']['labels']
+ test['domains'][srt:edn] = domain * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32')
+ return train, valid, test
+
+def source_target(datasets, sources, targets, unify_source = False):
+ N1 = len(sources.keys())
+ N_domain = N1 + len(targets.keys())
+ domain_idx = 0
+ for key in sources.keys():
+ sources[key] = domain_idx
+ domain_idx = domain_idx + 1
+ for key in targets.keys():
+ targets[key] = domain_idx
+ domain_idx = domain_idx + 1
+
+ source_train, source_valid, source_test = sets_concatenate(datasets, sources)
+ target_train, target_valid, target_test = sets_concatenate(datasets, targets)
+
+ if unify_source:
+ source_train['domains'] = to_one_hot(0 * source_train['domains'], 2)
+ source_valid['domains'] = to_one_hot(0 * source_valid['domains'], 2)
+ source_test['domains'] = to_one_hot(0 * source_test['domains'], 2)
+ target_train['domains'] = to_one_hot(0 * target_train['domains'] + 1, 2)
+ target_valid['domains'] = to_one_hot(0 * target_valid['domains'] + 1, 2)
+ target_test['domains'] = to_one_hot(0 * target_test['domains'] + 1, 2)
+ else:
+ source_train['domains'] = to_one_hot(source_train['domains'], N_domain)
+ source_valid['domains'] = to_one_hot(source_valid['domains'], N_domain)
+ source_test['domains'] = to_one_hot(source_test['domains'], N_domain)
+ target_train['domains'] = to_one_hot(target_train['domains'], N_domain)
+ target_valid['domains'] = to_one_hot(target_valid['domains'], N_domain)
+ target_test['domains'] = to_one_hot(target_test['domains'], N_domain)
+ return source_train, source_valid, source_test, target_train, target_valid, target_test
+
+def normalize(data):
+ image_mean = data - np.expand_dims(np.expand_dims(data.mean((1,2)),1),1)
+ image_std = np.sqrt((image_mean**2).mean((1,2))+1e-8)
+ return image_mean / np.expand_dims(np.expand_dims(image_std,1),1)
+
+def normalize_dataset(datasets, t = 'norm'):
+ if t == 'mean': # 'is' compares object identity; use '==' for string comparison
+ temp_data = []
+ for key in datasets.keys():
+ temp_data.append(datasets[key]['train']['images'])
+ temp_data = np.concatenate(temp_data)
+ image_mean = temp_data.mean((0, 1, 2))
+ image_mean = image_mean.astype('float32')
+ for key in datasets.keys():
+ datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32') - image_mean)/255.
+ datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32') - image_mean)/255.
+ datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32') - image_mean)/255.
+ elif t == 'standard':
+ for key in datasets.keys():
+ datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32'))/255.
+ datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32'))/255.
+ datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32'))/255.
+ elif t == 'none':
+ pass
+ elif t == 'individual':
+ for key in datasets.keys():
+ temp_data = datasets[key]['train']['images']
+ image_mean = temp_data.mean((0, 1, 2))
+ image_mean = image_mean.astype('float32')
+ datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32') - image_mean)/255.
+ datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32') - image_mean)/255.
+ datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32') - image_mean)/255.
+ elif t == 'norm':
+ for key in datasets.keys():
+ if key =='mnist':
+ tmp_1 = datasets[key]['train']['images'][:(len(datasets[key]['train']['images']) // 2)]
+ tmp_2 = datasets[key]['train']['images'][(len(datasets[key]['train']['images']) // 2):]
+ datasets[key]['train']['images'] = np.concatenate([normalize(tmp_1),normalize(tmp_2)])
+ else:
+ datasets[key]['train']['images'] = normalize(datasets[key]['train']['images'])
+
+ datasets[key]['valid']['images'] = normalize(datasets[key]['valid']['images'])
+ datasets[key]['test']['images'] = normalize(datasets[key]['test']['images'])
+
+ return datasets
+
+def source_target_separate(datasets, sources, targets):
+ N1 = len(sources.keys())
+ N_domain = N1 + len(targets.keys())
+ domain_idx = 0
+ sets = {}
+ for key in sources.keys():
+ sources[key] = domain_idx
+ sets[key] = domain_idx
+ domain_idx = domain_idx + 1
+ for key in targets.keys():
+ targets[key] = domain_idx
+ sets[key] = domain_idx
+ domain_idx = domain_idx + 1
+ for key in datasets.keys():
+ datasets[key]['train']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32'), N_domain)
+ datasets[key]['valid']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32'), N_domain)
+ datasets[key]['test']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32'), N_domain)
+ return datasets
+
+def source_target_separate_baseline(datasets, sources, targets):
+ N1 = len(sources.keys())
+ N_domain = N1 + len(targets.keys())
+ domain_idx = 0
+ domains = {}
+ for key in sources.keys():
+ sources[key] = domain_idx
+ domains[key] = domain_idx
+ domain_idx = domain_idx + 1
+ for key in targets.keys():
+ targets[key] = domain_idx
+ domains[key] = domain_idx
+ for key in datasets.keys():
+ datasets[key]['train']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32'), 2)
+ datasets[key]['valid']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32'), 2)
+ datasets[key]['test']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32'), 2)
+ return datasets
+
+
+# if __name__ == '__main__':
+# data_dir = '../dataset/folder/MNIST_data'
+# if not os.path.exists(data_dir):
+# os.makedirs(data_dir)
+# mnist = input_data.read_data_sets(data_dir, one_hot=True)
+# mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
+# mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
+#
+# # mnist_inv = mnist_train * (-1) + 255
+# # mnist_train = np.concatenate([mnist_train, mnist_inv])
+# mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
+# mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
+# mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
+# mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
+# # datasets['mnist']['train'] = {'images': mnist_train, 'labels': np.concatenate([mnist.train.labels, mnist.train.labels])}
+# print(mnist_test.shape)
+# print(mnist)
+
+
+
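`imageloader.py` leans heavily on `utils.to_one_hot`, whose definition is not part of this patch. Judging from the call sites (integer or float-coded labels in, with an optional second argument giving the number of classes, as in `to_one_hot(source_train['domains'], N_domain)`), it behaves roughly like the NumPy sketch below. This is an assumption-based reconstruction for reference, not the actual `utils` code.

```python
import numpy as np

def to_one_hot(labels, num_classes=None):
    # Hypothetical reconstruction of utils.to_one_hot based on how
    # imageloader.py calls it: label vector in, (N, num_classes)
    # float32 one-hot matrix out.
    labels = np.asarray(labels).astype(np.int64).ravel()
    if num_classes is None:
        num_classes = int(labels.max()) + 1
    one_hot = np.zeros((labels.shape[0], num_classes), dtype=np.float32)
    one_hot[np.arange(labels.shape[0]), labels] = 1.0
    return one_hot

print(to_one_hot([0, 2, 1]))
```

Note the SVHN label handling above (`svhn_train['y'] + 1` followed by wrapping values above 10 back to 1) remaps SVHN's 1..10 coding, where 10 means digit zero, before one-hot encoding.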
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/main_DANN.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/main_DANN.py
new file mode 100644
index 000000000..0f63749a7
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/main_DANN.py
@@ -0,0 +1,383 @@
+from npu_bridge.npu_init import *
+import os
+import argparse
+
+import numpy as np
+#from sklearn.manifold import TSNE
+#import scipy.io
+
+import tensorflow as tf
+#import tensorflow.contrib.slim as slim
+
+from MNISTModel_DANN import MNISTModel_DANN
+import imageloader as dataloader
+import utils
+from tqdm import tqdm
+
+import moxing as mox
+import precision_tool.tf_config as npu_tf_config
+import precision_tool.lib.config as CONFIG
+
+os.environ['CUDA_VISIBLE_DEVICES'] = '0' # index of the GPU to use (0 or 1); this was switched between 0 and 1 at one point
+
+gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.25)
+
+parser = argparse.ArgumentParser(description="Domain Adaptation Training")
+parser.add_argument("--data_url", type=str, default="obs://cann-id1254/dataset/", help="path to dataset folder") # mind the path
+parser.add_argument("--train_url", type=str, default="./output")
+parser.add_argument("--save_path", type=str, default="obs://cann-id1254/savef/", help="path to save experiment output") # mind the path
+parser.add_argument("--source", type=str, default="svhn", help="specify source dataset")
+parser.add_argument("--target", type=str, default="mnist", help="specify target dataset")
+
+
+args, unparsed = parser.parse_known_args()
+
+
+data_path = args.data_url
+save_path = args.save_path
+batch_size = 64
+num_steps = 5000 # originally 15000
+epsilon = 0.5
+M = 0.1
+num_test_steps = 5000
+valid_steps = 100
+
+# Create the data directories inside the ModelArts container
+data_dir = "/cache/dataset/"
+os.makedirs(data_dir)
+print("data_dir created")
+
+savePath = "/cache/savePath/"
+os.makedirs(savePath)
+print("savePath created")
+
+model_dir = "/cache/result"
+os.makedirs(model_dir)
+
+# Copy the dataset from OBS into the ModelArts container
+mox.file.copy_parallel(data_path, data_dir)
+
+# Since the bucket data was copied into the ModelArts container, load from data_dir instead of data_path
+datasets = dataloader.load_datasets(data_dir, {args.source:1, args.target:1})
+# d_train = datasets['mnist']['train'].get('images')
+# print("----------------------------------",len(d_train))
+# print("----------------------------------",len(d_train))
+# print("----------------------------------",len(d_train))
+# d1 = datasets.keys()
+# print(d1)
+# d_m = datasets.get('mnist')
+# print("mnist",d_m.keys())
+# d_m_train_d = d_m.get('train').get('images')
+# print("mnist train,test,valid",d_m_train_d.shape) #,d_m.get('test'),d_m.get('valid')
+# mnist_train_samples = d_m_train_d.shape[0]
+# end = mnist_train_samples // batch_size * batch_size
+# print("end sample ",end)
+# d_m_train_d = d_m_train_d[:end]
+# d_m_train_l =
+# print(d_m_train_d.shape)
+
+# d2 = datasets.get('svhn')
+# d3 = d2.get('train')
+# d4 = d3['images']
+# print(d2.keys())
+# print(d3.keys())
+# print(d4.shape)
+
+
+# dataset = dataloader.normalize_dataset(dataset)
+sources = {args.source:1}
+targets = {args.target:1}
+description = utils.description(sources, targets)
+source_train, source_valid, source_test, target_train, target_valid, target_test = dataloader.source_target(datasets, sources, targets, unify_source = True)
+
+options = {}
+options['sample_shape'] = (28,28,3)
+options['num_domains'] = 2
+options['num_targets'] = 1
+options['num_labels'] = 10
+options['batch_size'] = batch_size
+options['G_iter'] = 1
+options['D_iter'] = 1
+options['ef_dim'] = 32
+options['latent_dim'] = 128
+options['t_idx'] = np.argmax(target_test['domains'][0])
+options['source_num'] = batch_size
+options['target_num'] = batch_size
+options['reg_disc'] = 0.1
+options['reg_con'] = 0.1
+options['lr_g'] = 0.001
+options['lr_d'] = 0.001
+options['reg_tgt'] = 1.0
+description = utils.description(sources, targets)
+description = description + '_DANN_' + str(options['reg_disc'])
+
+
+tf.reset_default_graph()
+graph = tf.get_default_graph()
+model = MNISTModel_DANN(options)
+
+# Floating-point overflow detection
+# config = npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options))
+# config = npu_tf_config.session_dump_config(config, action='overflow')
+# sess = tf.Session(graph = graph, config=config)
+
+# Disable all operator fusion rules
+# config = npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options))
+# config = npu_tf_config.session_dump_config(config, action='fusion_off')
+# sess = tf.Session(graph = graph,config=config)
+
+# ModelArts training: create a temporary directory for profiling data
+profiling_dir = "/cache/profiling"
+os.makedirs(profiling_dir)
+
+# Mixed precision. With mixed precision alone, accuracy matches the V100 baseline. Job: 5-20-10-56
+config_proto = tf.ConfigProto(gpu_options=gpu_options)
+custom_op = config_proto.graph_options.rewrite_options.custom_optimizers.add()
+custom_op.name = 'NpuOptimizer'
+# Enable mixed precision
+custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")
+# Enable profiling collection. Job: 5-20-19-16
+custom_op.parameter_map["profiling_mode"].b = True
+# # Collect task trace data only
+# custom_op.parameter_map["profiling_options"].s = tf.compat.as_bytes('{"output":"/cache/profiling","task_trace":"on"}')
+
+# Collect both task trace and training trace data. Start with task trace only; if the problem still cannot be pinpointed, add training trace.
+custom_op.parameter_map["profiling_options"].s = tf.compat.as_bytes('{"output":"/cache/profiling","task_trace":"on","training_trace":"on","fp_point":"resnet_model/conv2d/Conv2Dresnet_model/batch_normalization/FusedBatchNormV3_Reduce","bp_point":"gradients/AddN_70"}')
+
+config = npu_config_proto(config_proto=config_proto)
+sess = tf.Session(graph = graph,config=config)
+
+# LossScale alone. Job: 5-20-15-05
+
+# # Mixed precision + LossScale + overflow data collection. Job: 5-20-16-56
+# # 1. Mixed precision
+# config_proto = tf.ConfigProto(gpu_options=gpu_options)
+# custom_op = config_proto.graph_options.rewrite_options.custom_optimizers.add()
+# custom_op.name = 'NpuOptimizer'
+# custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")
+# # 2. Overflow data collection
+# overflow_data_dir = "/cache/overflow"
+# os.makedirs(overflow_data_dir)
+# # dump_path: directory for dump data; it must be created in advance on the training environment (container or host side), and the configured run user must have read/write permission
+# custom_op.parameter_map["dump_path"].s = tf.compat.as_bytes(overflow_data_dir)
+# # enable_dump_debug: whether to enable overflow detection
+# custom_op.parameter_map["enable_dump_debug"].b = True
+# # dump_debug_mode: overflow detection mode, one of all/aicore_overflow/atomic_overflow
+# custom_op.parameter_map["dump_debug_mode"].s = tf.compat.as_bytes("all")
+# config = npu_config_proto(config_proto=config_proto)
+# sess = tf.Session(graph = graph,config=config)
+
+# Session from the original migrated code
+#sess = tf.Session(graph = graph, config=npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options)))
+
+tf.global_variables_initializer().run(session = sess)
+
+record = []
+
+gen_source_batch = utils.batch_generator([source_train['images'],
+ source_train['labels'],
+ source_train['domains']], batch_size)
+
+print("gen_source_batch ",gen_source_batch)
+gen_target_batch = utils.batch_generator([target_train['images'],
+ target_train['labels'],
+ target_train['domains']], batch_size)
+print("gen_targe_batch ",gen_target_batch)
+gen_source_batch_valid = utils.batch_generator([np.concatenate([source_valid['images'], source_test['images']]),
+ np.concatenate([source_valid['labels'], source_test['labels']]),
+ np.concatenate([source_valid['domains'], source_test['domains']])],
+ batch_size)
+print("gen_source_batch_valid ",gen_source_batch_valid)
+gen_target_batch_valid = utils.batch_generator([np.concatenate([target_valid['images'], target_test['images']]),
+ np.concatenate([target_valid['labels'], target_test['labels']]),
+ np.concatenate([target_valid['domains'], target_test['domains']])],
+ batch_size)
+print("gen_target_batch_valid",gen_target_batch_valid)
+# source_data_valid = np.concatenate([source_valid['images'], source_test['images']])
+# target_data_valid = np.concatenate([target_valid['images'], target_test['images']])
+# source_label_valid = np.concatenate([source_valid['labels'], source_test['labels']])
+#
+# print("source_data_valid ",source_data_valid.shape)
+# print("target_data_valid ",target_data_valid.shape)
+# print("source_label_valid ",source_label_valid.shape)
+
+#save_path = './Result/' + description + '/'
+# print("save_path",save_path)
+
+# # Create the save folder
+# if not os.path.exists(save_path):
+# print("Creating ",save_path)
+# os.makedirs(save_path)
+
+# save_path = os.path.join(savePath, description)
+# print("save_path",save_path)
+# if not os.path.exists(save_path):
+# print("Creating!!!")
+# os.mkdir(save_path)
+
+def compute_MMD(H_fake, H_real, sigma_range=[5]):
+
+ min_len = min([len(H_real),len(H_fake)])
+ h_real = H_real[:min_len]
+ h_fake = H_fake[:min_len]
+
+ dividend = 1
+ dist_x, dist_y = h_fake/dividend, h_real/dividend
+ x_sq = np.expand_dims(np.sum(dist_x**2, axis=1), 1) # 64*1
+ y_sq = np.expand_dims(np.sum(dist_y**2, axis=1), 1) # 64*1
+ dist_x_T = np.transpose(dist_x)
+ dist_y_T = np.transpose(dist_y)
+ x_sq_T = np.transpose(x_sq)
+ y_sq_T = np.transpose(y_sq)
+
+ tempxx = -2*np.matmul(dist_x,dist_x_T) + x_sq + x_sq_T # (xi -xj)**2
+ tempxy = -2*np.matmul(dist_x,dist_y_T) + x_sq + y_sq_T # (xi -yj)**2
+ tempyy = -2*np.matmul(dist_y,dist_y_T) + y_sq + y_sq_T # (yi -yj)**2
+
+
+ for sigma in sigma_range:
+ kxx, kxy, kyy = 0, 0, 0
+ kxx += np.mean(np.exp(-tempxx/2/(sigma**2)))
+ kxy += np.mean(np.exp(-tempxy/2/(sigma**2)))
+ kyy += np.mean(np.exp(-tempyy/2/(sigma**2)))
+
+ gan_cost_g = np.sqrt(kxx + kyy - 2*kxy)
+ return gan_cost_g
+
+best_valid = -1
+best_acc = -1
+
+best_src_acc = -1
+best_src = -1
+
+best_bound_acc = -1
+best_bound = 100000
+
+best_iw_acc = -1
+best_iw = 100000
+
+best_ben_acc = -1
+best_ben = 100000
+
+#output_file = save_path+'acc.txt'
+output_file = savePath+description+"acc.txt"
+print('Training...')
+with open(output_file, 'w') as fout:
+ for i in tqdm(range(1, num_steps + 1)):
+
+ #print("step ",i)
+ # Adaptation param and learning rate schedule as described in the paper
+
+ X0, y0, d0 = gen_source_batch.__next__() # Python 2's g.next() was renamed to g.__next__() in Python 3; next(g) also works
+ X1, y1, d1 = gen_target_batch.__next__()
+
+ X = np.concatenate([X0, X1], axis = 0)
+ #print("Input X ",X.shape)
+ d = np.concatenate([d0, d1], axis = 0)
+ #print("Input d ",d.shape)
+ #print("Input y0 ",y0.shape)
+ for j in range(options['D_iter']):
+ # Update Adversary
+ _, mi_loss = \
+ sess.run([model.train_mi_ops, model.bound],
+ feed_dict={model.X:X, model.train: True})
+
+ for j in range(options['G_iter']):
+ # Update feature extractor & label predictor
+ _, tploss, tp_acc = \
+ sess.run([model.train_context_ops, model.y_loss, model.y_acc],
+ feed_dict={model.X: X, model.y: y0, model.train: True})
+
+ for j in range(options['G_iter']):
+ # Update feature extractor & label predictor
+ _, td_loss, td_acc = \
+ sess.run([model.train_domain_ops, model.d_loss, model.d_acc],
+ feed_dict={model.X: X, model.domains: d, model.train: True})
+
+ if i % 10 == 0:
+ print ('%s iter %d mi_loss: %.4f d_loss: %.4f p_acc: %.4f' % \
+ (description, i, mi_loss, td_loss, tp_acc))
+
+ '''
+ if i % valid_steps == 0:
+ # Calculate bound
+ # init_new_vars_op = tf.initialize_variables(model.domain_test_vars)
+ # sess.run(init_new_vars_op)
+
+ # for s in range(num_test_steps):
+ # X0_test, y0_test, d0_test = gen_source_batch.next()
+ # X1_test, y1_test, d1_test = gen_target_batch.next()
+
+ # X_test = np.concatenate([X0_test, X1_test], axis = 0)
+ # d_test = np.concatenate([d0_test, d1_test], axis = 0)
+
+ # _ = sess.run(model.test_domain_ops, feed_dict={model.X:X_test,
+ # model.domains: d_test, model.train: False})
+
+ # source_pq = utils.get_data_pq(sess, model, source_data_valid)
+ # target_pq = utils.get_data_pq(sess, model, target_data_valid)
+
+ # st_ratio = float(source_train['images'].shape[0]) / target_train['images'].shape[0]
+
+ # src_qp = source_pq[:,1] / source_pq[:,0] * st_ratio
+ # tgt_qp = target_pq[:,1] / target_pq[:,0] * st_ratio
+
+ # w_source_pq = np.copy(src_qp)
+ # w_source_pq[source_pq[:,0]=1),(target_pq[:,0]=1),(source_pq[:,0] best_valid:
+ best_params = utils.get_params(sess)
+ best_valid = target_valid_acc
+ best_acc = target_test_acc
+
+ labd = np.concatenate((source_test['domains'], target_test['domains']), axis = 0)
+ print ('src valid: %.4f tgt valid: %.4f tgt test: %.4f best: %.4f ' % \
+ (source_valid_acc, target_valid_acc, target_test_acc, best_acc) )
+
+ acc_store = '%.4f, %.4f, %.4f, %.4f \n'%(source_valid_acc, target_valid_acc, target_test_acc, best_acc)
+ fout.write(acc_store)
+ '''
+
+# After training finishes, copy the training output from the ModelArts container to OBS
+mox.file.copy_parallel(savePath, save_path)
+
+# After training finishes, copy the training output from the ModelArts container to OBS
+# mox.file.copy_parallel(model_dir, args.train_url)
+# mox.file.copy_parallel(CONFIG.ROOT_DIR, args.train_url) # save floating-point exception data to OBS
+# mox.file.copy_parallel(overflow_data_dir, args.train_url) # save overflow data to OBS
+mox.file.copy_parallel(profiling_dir, args.train_url) # save profiling data to OBS
\ No newline at end of file
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/utils.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/utils.py
new file mode 100644
index 000000000..1c44d36ec
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/utils.py
@@ -0,0 +1,364 @@
+from npu_bridge.npu_init import *
+import tensorflow as tf
+import numpy as np
+import matplotlib.patches as mpatches
+# import matplotlib.pyplot as plt
+from mpl_toolkits.axes_grid1 import ImageGrid
+from sklearn import metrics
+import scipy
+# Model construction utilities below adapted from
+# https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html#deep-mnist-for-experts
+
+def get_params(sess):
+ variables = tf.trainable_variables()
+ params = {}
+ for i in range(len(variables)):
+ name = variables[i].name
+ params[name] = sess.run(variables[i])
+ return params
+
+
+def to_one_hot(x, N = -1):
+ x = x.astype('int32')
+ if np.min(x) !=0 and N == -1:
+ x = x - np.min(x)
+ x = x.reshape(-1)
+ if N == -1:
+ N = np.max(x) + 1
+ label = np.zeros((x.shape[0],N))
+ idx = range(x.shape[0])
+ label[idx,x] = 1
+ return label.astype('float32')
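As a quick sanity check, a condensed copy of `to_one_hot` behaves as follows (labels that do not start at 0 are shifted down first):

```python
import numpy as np

def to_one_hot(x, N=-1):
    x = np.asarray(x).astype('int32')
    if np.min(x) != 0 and N == -1:
        x = x - np.min(x)  # shift labels so the smallest becomes 0
    x = x.reshape(-1)
    if N == -1:
        N = np.max(x) + 1
    label = np.zeros((x.shape[0], N), dtype='float32')
    label[np.arange(x.shape[0]), x] = 1
    return label

print(to_one_hot(np.array([1, 2, 3])))
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```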
+
+def image_mean(x):
+ x_mean = x.mean((0, 1, 2))
+ return x_mean
+
+def shape(tensor):
+ """
+ Get the shape of a tensor. This is a compile-time operation,
+ meaning that it runs when building the graph, not running it.
+ This means that it cannot know the shape of any placeholders
+ or variables with shape determined by feed_dict.
+ """
+ return tuple([d.value for d in tensor.get_shape()])
+
+
+def fully_connected_layer(in_tensor, out_units):
+ """
+ Add a fully connected layer to the default graph, taking as input `in_tensor`, and
+ creating a hidden layer of `out_units` neurons. This should be done in a new variable
+ scope. Creates variables W and b, and computes activation_function(in * W + b).
+ """
+ _, num_features = shape(in_tensor)
+ weights = tf.get_variable(name = "weights", shape = [num_features, out_units], initializer = tf.truncated_normal_initializer(stddev=0.1))
+ biases = tf.get_variable( name = "biases", shape = [out_units], initializer=tf.constant_initializer(0.1))
+ return tf.matmul(in_tensor, weights) + biases
+
+
+def conv2d(in_tensor, filter_shape, out_channels):
+ """
+ Creates a conv2d layer. The input image (which should already be shaped like an image,
+ a 4D tensor [N, W, H, C]) is convolved with `out_channels` filters, each with shape
+ `filter_shape` (a width and height). The ReLU activation function is used on the
+ output of the convolution.
+ """
+ _, _, _, channels = shape(in_tensor)
+ W_shape = filter_shape + [channels, out_channels]
+
+ # create variables
+ weights = tf.get_variable(name = "weights", shape = W_shape, initializer=tf.truncated_normal_initializer(stddev=0.1))
+ biases = tf.get_variable(name = "biases", shape = [out_channels], initializer= tf.constant_initializer(0.1))
+ conv = tf.nn.conv2d( in_tensor, weights, strides=[1, 1, 1, 1], padding='SAME')
+ h_conv = conv + biases
+ return h_conv
+
+
+#def conv1d(in_tensor, filter_shape, out_channels):
+# _, _, channels = shape(in_tensor)
+# W_shape = [filter_shape, channels, out_channels]
+#
+# W = tf.truncated_normal(W_shape, dtype = tf.float32, stddev = 0.1)
+# weights = tf.Variable(W, name = "weights")
+# b = tf.truncated_normal([out_channels], dtype = tf.float32, stddev = 0.1)
+# biases = tf.Variable(b, name = "biases")
+# conv = tf.nn.conv1d(in_tensor, weights, stride=1, padding='SAME')
+# h_conv = conv + biases
+# return h_conv
+
+def vars_from_scopes(scopes):
+ """
+ Returns list of all variables from all listed scopes. Operates within the current scope,
+ so if current scope is "scope1", then passing in ["weights", "biases"] will find
+ all variables in scopes "scope1/weights" and "scope1/biases".
+ """
+ current_scope = tf.get_variable_scope().name
+ #print(current_scope)
+ if current_scope != '':
+ scopes = [current_scope + '/' + scope for scope in scopes]
+ var = []
+ for scope in scopes:
+ for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope):
+ var.append(v)
+ return var
+
+def tfvar2str(tf_vars):
+ names = []
+ for i in range(len(tf_vars)):
+ names.append(tf_vars[i].name)
+ return names
+
+
+def shuffle_aligned_list(data):
+ """Shuffle arrays in a list by shuffling each array identically."""
+ num = data[0].shape[0]
+ p = np.random.permutation(num)
+ return [d[p] for d in data]
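A short demonstration of the aligned-shuffle invariant: every array in the list is permuted identically, so corresponding rows stay paired (toy arrays are illustrative):

```python
import numpy as np

def shuffle_aligned_list(data):
    """Shuffle each array in `data` with the same permutation."""
    p = np.random.permutation(data[0].shape[0])
    return [d[p] for d in data]

X = np.arange(10).reshape(5, 2)  # row i is [2*i, 2*i + 1]
y = np.arange(5)
Xs, ys = shuffle_aligned_list([X, y])
# rows stay paired after shuffling: row i of Xs is [2*ys[i], 2*ys[i] + 1]
assert np.array_equal(Xs[:, 0], 2 * ys)
```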
+
+def normalize_images(img_batch):
+ fl = tf.cast(img_batch, tf.float32)
+ return tf.map_fn(tf.image.per_image_standardization, fl)
+
+
+def batch_generator(data, batch_size, shuffle=True):
+ """Generate batches of data.
+
+ Given a list of array-like objects, generate batches of a given
+ size by yielding a list of array-like objects corresponding to the
+ same slice of each input.
+ """
+ if shuffle:
+ data = shuffle_aligned_list(data)
+
+ batch_count = 0
+ while True:
+ if batch_count * batch_size + batch_size >= len(data[0]):
+ batch_count = 0
+
+ if shuffle:
+ data = shuffle_aligned_list(data)
+
+ start = batch_count * batch_size
+ end = start + batch_size
+ batch_count += 1
+ yield [d[start:end] for d in data]
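With shuffling stripped out for determinism, the wrap-around behaviour of `batch_generator` is easy to see; note it never yields a final partial batch, so a few trailing elements are skipped each epoch:

```python
import numpy as np

def batch_generator(data, batch_size):
    # condensed copy of the generator above, without shuffling
    batch_count = 0
    while True:
        if batch_count * batch_size + batch_size >= len(data[0]):
            batch_count = 0  # wrap around instead of yielding a partial batch
        start = batch_count * batch_size
        end = start + batch_size
        batch_count += 1
        yield [d[start:end] for d in data]

gen = batch_generator([np.arange(10)], 4)
print(next(gen)[0])  # [0 1 2 3]
print(next(gen)[0])  # [4 5 6 7]
print(next(gen)[0])  # [0 1 2 3] -- wrapped; elements 8 and 9 are never served
```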
+
+
+def get_auc(predictions, labels):
+ fpr, tpr, thresholds = metrics.roc_curve(np.squeeze(labels).astype('float32'), np.squeeze(predictions).astype('float32'), pos_label=2)
+ return metrics.auc(fpr, tpr)
+
+
+def predictor_accuracy(predictions, labels):
+ """
+ Returns a number in [0, 1] indicating the percentage of `labels` predicted
+ correctly (i.e., assigned max logit) by `predictions`.
+ """
+ return tf.reduce_mean(tf.cast(tf.equal(tf.argmax(predictions, 1), tf.argmax(labels, 1)),tf.float32))
+
+def get_Wasser_distance(sess, model, data, L = 1, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+ Wasser = np.zeros((N,L))
+ if L == 1:
+ Wasser = Wasser.reshape(-1)
+ l = np.float32(1.)
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch) # srt + batch (not batch - 1), matching the other loops, so no sample is skipped between batches
+ X = data[srt:edn]
+ if L == 1:
+ Wasser[srt:edn] = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.lr_g:l, model.train: False})
+ else:
+ Wasser[srt:edn,:] = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.lr_g:l, model.train: False})
+ return Wasser
+
+def get_data_pred(sess, model, obj_acc, data, labels, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+ if obj_acc == 'feature':
+ temp = sess.run(model.features,feed_dict={model.X: data[0:2].astype('float32'), model.train: False})
+ pred = np.zeros((data.shape[0],temp.shape[1])).astype('float32')
+ else:
+ pred= np.zeros(labels.shape).astype('float32')
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch)
+ X = data[srt:edn]
+ if obj_acc == 'y': # compare strings with ==, not 'is'
+ pred[srt:edn,:] = sess.run(model.y_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
+ elif obj_acc == 'd':
+ if i == 0:
+ temp = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
+ pred= np.zeros((labels.shape[0], temp.shape[1])).astype('float32')
+ pred[srt:edn,:]= sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
+ elif obj_acc == 'feature':
+ pred[srt:edn] = sess.run(model.features,feed_dict={model.X: X.astype('float32'), model.train: False})
+ return pred
+
+def get_data_pq(sess, model, data, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+
+ z_pq = np.zeros([N, model.num_domains]).astype('float32')
+
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch)
+ X = data[srt:edn]
+
+ z_pq[srt:edn,:] = sess.run(model.test_pq,feed_dict={model.X: X.astype('float32'), model.train: False})
+
+ return z_pq
+
+def get_feature(sess, model, data, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+
+ feature = np.zeros([N, model.feature_dim]).astype('float32')
+
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch)
+ X = data[srt:edn]
+
+ feature[srt:edn,:] = sess.run(model.features,feed_dict={model.X: X.astype('float32'), model.train: False})
+
+ return feature
+
+def get_y_loss(sess, model, data, label, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+
+ y_loss = np.zeros(N).astype('float32')
+
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch)
+ X = data[srt:edn]
+ y = label[srt:edn]
+
+ y_loss[srt:edn] = sess.run(model.y_loss,feed_dict={model.X: X.astype('float32'), model.y: y, model.train: False})
+
+ return y_loss
+
+
+def get_acc(pred, label):
+ if len(pred.shape) > 1:
+ pred = np.argmax(pred,axis = 1)
+ if len(label.shape) > 1:
+ label = np.argmax(label, axis = 1)
+ #pdb.set_trace()
+ acc = (pred == label).sum().astype('float32')
+ return acc/label.shape[0]
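A condensed NumPy version of `get_acc` for reference; logits and one-hot labels are both reduced to class ids with argmax before comparing (toy values are illustrative):

```python
import numpy as np

def get_acc(pred, label):
    if pred.ndim > 1:
        pred = np.argmax(pred, axis=1)    # logits -> class ids
    if label.ndim > 1:
        label = np.argmax(label, axis=1)  # one-hot -> class ids
    return (pred == label).mean()

logits = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
labels = np.array([0, 1, 1, 1])
print(get_acc(logits, labels))  # 0.75
```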
+
+
+# def imshow_grid(images, shape=[2, 8]):
+# """Plot images in a grid of a given shape."""
+# fig = plt.figure(1)
+# grid = ImageGrid(fig, 111, nrows_ncols=shape, axes_pad=0.05)
+
+# size = shape[0] * shape[1]
+# for i in range(size):
+# grid[i].axis('off')
+# grid[i].imshow(images[i]) # The AxesGrid object work as a list of axes.
+
+# plt.show()
+
+def dic2list(sources, targets):
+ names_dic = {}
+ for key in sources:
+ names_dic[sources[key]] = key
+ for key in targets:
+ names_dic[targets[key]] = key
+ names = []
+ for i in range(len(names_dic)):
+ names.append(names_dic[i])
+ return names
+
+# def plot_embedding(X, y, d, names, title=None):
+# """Plot an embedding X with the class label y colored by the domain d."""
+
+# x_min, x_max = np.min(X, 0), np.max(X, 0)
+# X = (X - x_min) / (x_max - x_min)
+# colors = np.array([[0.6,0.4,1.0,1.0],
+# [1.0,0.1,1.0,1.0],
+# [0.6,1.0,0.6,1.0],
+# [0.1,0.4,0.4,1.0],
+# [0.4,0.6,0.1,1.0],
+# [0.4,0.4,0.4,0.4]]
+# )
+# # Plot colors numbers
+# plt.figure(figsize=(10,10))
+# ax = plt.subplot(111)
+# for i in range(X.shape[0]):
+# # plot colored number
+# plt.text(X[i, 0], X[i, 1], str(y[i]),
+# color=colors[d[i]],
+# fontdict={'weight': 'bold', 'size': 9})
+
+# plt.xticks([]), plt.yticks([])
+# patches = []
+# for i in range(max(d)+1):
+# patches.append( mpatches.Patch(color=colors[i], label=names[i]))
+# plt.legend(handles=patches)
+# if title is not None:
+# plt.title(title)
+
+def load_plot(file_name):
+ mat = scipy.io.loadmat(file_name)
+ dann_tsne = mat['dann_tsne']
+ test_labels = mat['test_labels']
+ test_domains = mat['test_domains']
+ names = mat['names']
+ plot_embedding(dann_tsne, test_labels.argmax(1), test_domains.argmax(1), names, 'Domain Adaptation')
+
+
+
+def softmax(x):
+ """Compute softmax values for each sets of scores in x."""
+ e_x = np.exp(x - np.max(x))
+ return e_x / e_x.sum(axis=0)
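Note that this softmax normalises along axis 0; for a 1-D input that yields a proper probability distribution, and subtracting the max first keeps the exponentials stable:

```python
import numpy as np

def softmax(x):
    e_x = np.exp(x - np.max(x))   # shift for numerical stability
    return e_x / e_x.sum(axis=0)  # normalises along axis 0

p = softmax(np.array([1.0, 2.0, 3.0]))
print(p.sum())      # ~1.0
print(p[2] > p[1])  # True: larger score, larger probability
```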
+
+def norm_matrix(X, l):
+ Y = np.zeros(X.shape)
+ for i in range(X.shape[0]):
+ Y[i] = X[i]/np.linalg.norm(X[i],l)
+ return Y
+
+
+def description(sources, targets):
+ source_names = sources.keys()
+ #print(source_names)
+ target_names = targets.keys()
+ N = min(len(source_names), 4)
+
+ source_names = list(source_names) # convert the keys view to a list so it can be indexed
+ target_names = list(target_names)
+ description = source_names[0] # in Python 3, dict keys() cannot be sliced; converting to a list first makes this work
+ for i in range(1,N):
+ description = description + '_' + source_names[i]
+ description = description + '-' + target_names[0]
+ return description
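`description` simply joins up to four source names with '_' and appends the first target after '-'. The same logic on hypothetical domain dicts (names here are illustrative, not from the patch):

```python
# hypothetical source/target dicts mapping domain name -> index
sources = {'mnist': 0, 'usps': 1}
targets = {'svhn': 2}

# keys() views cannot be indexed in Python 3, so convert to lists first
source_names = list(sources.keys())
target_names = list(targets.keys())

desc = source_names[0]
for name in source_names[1:4]:  # at most four source names, as in description()
    desc = desc + '_' + name
desc = desc + '-' + target_names[0]
print(desc)  # mnist_usps-svhn (dicts keep insertion order in Python 3.7+)
```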
+
+def channel_dropout(X, p):
+ if p == 0:
+ return X
+ mask = tf.random_uniform(shape = [tf.shape(X)[0], tf.shape(X)[2]])
+ mask = mask + 1 - p
+ mask = tf.floor(mask)
+ dropout = tf.expand_dims(mask,axis = 1) * X/(1-p)
+ return dropout
+
+def sigmoid(x):
+ return 1 / (1 + np.exp(-x))
--
Gitee
From 224e4010864b54c9bc89177a1fa8be645f0c9136 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:39:43 +0000
Subject: [PATCH 07/16] =?UTF-8?q?=E6=96=B0=E5=BB=BA=20MI=5FDA?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/.keep | 0
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/.keep
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/.keep b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/.keep
new file mode 100644
index 000000000..e69de29bb
--
Gitee
From 464a8f536bc31349734dfb695b1fd82a7dee99b6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:40:21 +0000
Subject: [PATCH 08/16] code
---
.../CLUB_tf_wubo9826/MI_DA/MNISTModel_DANN.py | 469 ++++++++++++++++++
.../CLUB_tf_wubo9826/MI_DA/imageloader.py | 431 ++++++++++++++++
.../club/CLUB_tf_wubo9826/MI_DA/main_DANN.py | 383 ++++++++++++++
.../cv/club/CLUB_tf_wubo9826/MI_DA/utils.py | 364 ++++++++++++++
4 files changed, 1647 insertions(+)
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/MNISTModel_DANN.py
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/imageloader.py
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/main_DANN.py
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/utils.py
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/MNISTModel_DANN.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/MNISTModel_DANN.py
new file mode 100644
index 000000000..41ffd3c07
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/MNISTModel_DANN.py
@@ -0,0 +1,469 @@
+from npu_bridge.npu_init import *
+import tensorflow as tf
+import utils
+import math
+import tensorflow.contrib.layers as layers
+#import keras.backend as K
+
+
+def leaky_relu(x, a=0.1):
+ return tf.maximum(x, a * x)
+
+def noise(x, phase=True, std=1.0):
+ eps = tf.random_normal(tf.shape(x), 0.0, std)
+ output = tf.where(phase, x + eps, x)
+ return output
+
+class MNISTModel_DANN(object):
+ """Simple MNIST domain adaptation model."""
+ def __init__(self, options):
+ self.reg_disc = options['reg_disc']
+ self.reg_con = options['reg_con']
+ self.reg_tgt = options['reg_tgt']
+ self.lr_g = options['lr_g']
+ self.lr_d = options['lr_d']
+ self.sample_type = tf.float32
+ self.num_labels = options['num_labels']
+ self.num_domains = options['num_domains']
+ self.num_targets = options['num_targets']
+ self.sample_shape = options['sample_shape']
+ self.ef_dim = options['ef_dim']
+ self.latent_dim = options['latent_dim']
+ self.batch_size = options['batch_size']
+ self.initializer = tf.contrib.layers.xavier_initializer()
+ # self.initializer = tf.truncated_normal_initializer(stddev=0.1)
+ self.X = tf.placeholder(tf.as_dtype(self.sample_type), [None] + list(self.sample_shape), name="input_X")
+ #print(self.X)
+ #self.X = tf.placeholder(tf.as_dtype(self.sample_type), [None]+[28,28,3], name="input_X")
+ #print(self.X)
+ self.y = tf.placeholder(tf.float32, [None, self.num_labels], name="input_labels")
+ #print(self.y)
+ #self.y = tf.placeholder(tf.float32, [self.batch_size,self.num_labels], name="input_labels")
+ self.domains = tf.placeholder(tf.float32, [None, self.num_domains], name="input_domains")
+ self.train = tf.placeholder(tf.bool, [], name = 'train')
+ self._build_model()
+ self._setup_train_ops()
+
+ # def feature_extractor(self, reuse = False):
+ # input_X = utils.normalize_images(self.X)
+ # with tf.variable_scope('feature_extractor_conv1',reuse = reuse):
+ # h_conv1 = layers.conv2d(input_X, self.ef_dim, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_pool1 = layers.max_pool2d(h_conv1, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_conv2',reuse = reuse):
+ # h_conv2 = layers.conv2d(h_pool1, self.ef_dim * 2, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_pool2 = layers.max_pool2d(h_conv2, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_conv3',reuse = reuse):
+ # h_conv3 = layers.conv2d(h_pool2, self.ef_dim * 4, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_pool3 = layers.max_pool2d(h_conv3, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_fc1'):
+ # fc_input = layers.flatten(h_pool3)
+ # fc_1 = layers.fully_connected(inputs=fc_input, num_outputs=self.latent_dim,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+
+ # self.features = fc_1
+ # feature_shape = self.features.get_shape()
+ # self.feature_dim = feature_shape[1].value
+
+ # self.features_src = tf.slice(self.features, [0, 0], [self.batch_size, -1])
+ # self.features_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features, [0, 0], [self.batch_size, -1]), lambda: self.features)
+
+ # def feature_extractor(self, reuse = False):
+ # input_X = utils.normalize_images(self.X)
+ # with tf.variable_scope('feature_extractor_conv1',reuse = reuse):
+ # h_conv1 = layers.conv2d(input_X, self.ef_dim, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv1 = layers.conv2d(h_conv1, self.ef_dim, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv1 = layers.max_pool2d(h_conv1, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_conv2',reuse = reuse):
+ # h_conv2 = layers.conv2d(h_conv1, self.ef_dim * 2, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv2 = layers.conv2d(h_conv2, self.ef_dim * 2, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv2 = layers.max_pool2d(h_conv2, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_conv3',reuse = reuse):
+ # h_conv3 = layers.conv2d(h_conv2, self.ef_dim * 4, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv3 = layers.conv2d(h_conv3, self.ef_dim * 4, 3, stride=1,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # h_conv3 = layers.max_pool2d(h_conv3, [2, 2], 2, padding='SAME')
+
+ # with tf.variable_scope('feature_extractor_fc1'):
+ # # fc_input = tf.nn.dropout(layers.flatten(h_conv3), keep_prob = 0.9)
+ # fc_input = layers.flatten(h_conv3)
+ # fc_1 = layers.fully_connected(inputs=fc_input, num_outputs=self.latent_dim,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+
+ # self.features = fc_1
+ # feature_shape = self.features.get_shape()
+ # self.feature_dim = feature_shape[1].value
+ # self.features_src = tf.slice(self.features, [0, 0], [self.batch_size, -1])
+ # self.features_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features, [0, 0], [self.batch_size, -1]), lambda: self.features)
+
+
+ def feature_extractor_c(self, reuse = False):
+ training = tf.cond(self.train, lambda: True, lambda: False)
+ X = layers.instance_norm(self.X)
+ with tf.variable_scope('feature_extractor_c', reuse = reuse):
+ h_conv1 = layers.conv2d(self.X, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.max_pool2d(h_conv1, 2, 2, padding='SAME')
+ h_conv1 = noise(tf.layers.dropout(h_conv1, rate=0.5, training=training), phase=training)
+
+
+
+ h_conv2 = layers.conv2d(h_conv1, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.max_pool2d(h_conv2, 2, 2, padding='SAME')
+ h_conv2 = noise(tf.layers.dropout(h_conv2, rate=0.5, training=training), phase=training)
+
+
+
+ h_conv3 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = tf.reduce_mean(h_conv3, axis=[1, 2])
+
+ self.features_c = h_conv3
+ feature_shape = self.features_c.get_shape()
+ self.feature_c_dim = feature_shape[1].value
+ self.features_c_src = tf.slice(self.features_c, [0, 0], [self.batch_size, -1])
+ self.features_c_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features_c, [0, 0], [self.batch_size, -1]), lambda: self.features_c)
+
+ def feature_extractor_d(self, reuse = False):
+ training = tf.cond(self.train, lambda: True, lambda: False)
+ X = layers.instance_norm(self.X)
+ with tf.variable_scope('feature_extractor_d', reuse = reuse):
+ h_conv1 = layers.conv2d(self.X, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
+ h_conv1 = layers.max_pool2d(h_conv1, 2, 2, padding='SAME')
+ h_conv1 = noise(tf.layers.dropout(h_conv1, rate=0.5, training=training), phase=training)
+
+
+
+ h_conv2 = layers.conv2d(h_conv1, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
+ h_conv2 = layers.max_pool2d(h_conv2, 2, 2, padding='SAME')
+ h_conv2 = noise(tf.layers.dropout(h_conv2, rate=0.5, training=training), phase=training)
+
+
+
+ h_conv3 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
+ activation_fn=None, weights_initializer=self.initializer)
+ h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
+ h_conv3 = tf.reduce_mean(h_conv3, axis=[1, 2])
+
+ self.features_d = h_conv3
+ # self.features_d_src = tf.slice(self.features_d, [0, 0], [self.batch_size, -1])
+ # self.features_d_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features_d, [0, 0], [self.batch_size, -1]), lambda: self.features_d)
+
+ def mi_net(self, input_sample, reuse = False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ fc_1 = layers.fully_connected(inputs=input_sample, num_outputs=64, activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ fc_2 = layers.fully_connected(inputs=fc_1, num_outputs=1, activation_fn=None, weights_initializer=self.initializer)
+ return fc_2
+
+
+ def mine(self):
+ # tmp_1 = tf.random_shuffle(tf.range(self.batch_size))
+ # tmp_2 = tf.random_shuffle(tf.range(self.batch_size))
+ # shuffle_d_1 = tf.gather(tf.slice(tf.identity(self.features_d), [0, 0], [self.batch_size, -1]), tmp_1)
+ # shuffle_d_2 = tf.gather(tf.slice(tf.identity(self.features_d), [self.batch_size, 0], [self.batch_size, -1]), tmp_2)
+ # self.shuffle_d = tf.concat([shuffle_d_1, shuffle_d_2], axis = 0)
+ tmp = tf.random_shuffle(tf.range(self.batch_size*2))
+ self.shuffle_d = tf.gather(self.features_d, tmp)
+
+ input_0 = tf.concat([self.features_c,self.features_d], axis = -1)
+ input_1 = tf.concat([self.features_c,self.shuffle_d], axis = -1)
+
+ T_0 = self.mi_net(input_0)
+ T_1 = self.mi_net(input_1, reuse=True)
+
+ E_pos = math.log(2.) - tf.nn.softplus(-T_0)
+ E_neg = tf.nn.softplus(-T_1) + T_1 - math.log(2.)
+
+ # grad = tf.gradients(mi_l, [self.features_c, self.features_d, self.shuffle_d])
+ # pdb.set_trace()
+ # self.penalty = tf.reduce_mean(tf.square(tf.reduce_sum(tf.square(grad))-1.))
+ self.bound = tf.reduce_mean(E_pos - E_neg)
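`mine()` computes a Jensen-Shannon-style MI lower bound from critic outputs on joint and shuffled pairs. A NumPy sketch of the bound itself, where `t0` and `t1` are stand-ins for the critic values `T_0` and `T_1`:

```python
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)  # numerically stable log(1 + e^z)

def jsd_bound(t0, t1):
    e_pos = np.log(2.0) - softplus(-t0)       # joint pairs
    e_neg = softplus(-t1) + t1 - np.log(2.0)  # shuffled (marginal) pairs
    return np.mean(e_pos) - np.mean(e_neg)

t = np.zeros(8)
print(jsd_bound(t, t))  # an uninformative critic gives a bound of ~0
print(jsd_bound(np.full(8, 5.0), np.full(8, -5.0)) > 0)  # a separating critic: positive
```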
+
+
+ def club(self, reuse=False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
+
+ p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
+
+ mu = prediction
+ logvar = prediction_1
+
+ prediction_tile = tf.tile(tf.expand_dims(prediction, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+ features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
+
+ positive = -(mu - self.features_d)**2/2./tf.exp(logvar)
+ negative = -tf.reduce_mean((features_d_tile-prediction_tile)**2, 1)/2./tf.exp(logvar)
+
+ # positive = -(prediction-self.features_d)**2
+ # negative = -tf.reduce_mean((features_d_tile-prediction_tile)**2, 1)
+
+ self.lld = tf.reduce_mean(tf.reduce_sum(positive, -1))
+ self.bound = tf.reduce_mean(tf.reduce_sum(positive, -1)-tf.reduce_sum(negative, -1))
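`club()` is the CLUB upper bound: the positive term scores each matched pair under the variational Gaussian q(d|c), while the negative term averages that log-density over all d_j. A NumPy sketch with an assumed fixed `mu`/`logvar` in place of the learned network:

```python
import numpy as np

def club_bound(mu, logvar, y):
    # mu, logvar, y: [n, d]; variational density q(y | x_i) = N(mu_i, exp(logvar_i))
    positive = -(mu - y) ** 2 / 2.0 / np.exp(logvar)
    # for each i, average the (unnormalised) log-density of every y_j under q(.|x_i)
    neg_all = -(y[None, :, :] - mu[:, None, :]) ** 2 / 2.0 / np.exp(logvar)[:, None, :]
    negative = neg_all.mean(axis=1)
    return (positive.sum(-1) - negative.sum(-1)).mean()

rng = np.random.RandomState(0)
y = rng.randn(16, 4)
# a perfect predictor (mu == y) zeroes the positive term, so the bound is non-negative
print(club_bound(y, np.zeros_like(y), y) >= 0)  # True
```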
+
+ def club_sample(self, reuse=False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
+
+ p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
+
+ mu = prediction
+ logvar = prediction_1
+
+ tmp = tf.random_shuffle(tf.range(self.batch_size*2))
+ self.shuffle_d = tf.gather(self.features_d, tmp)
+
+ positive = -(mu - self.features_d)**2/2./tf.exp(logvar)
+ negative = -(mu - self.shuffle_d)**2/2./tf.exp(logvar)
+
+ self.lld = tf.reduce_mean(tf.reduce_sum(positive, -1))
+ self.bound = tf.reduce_mean(tf.reduce_sum(positive, -1)-tf.reduce_sum(negative, -1))
+
+ def NWJ(self, reuse=False):
+ features_c_tile = tf.tile(tf.expand_dims(self.features_c, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
+ features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+ input_0 = tf.concat([self.features_c, self.features_d], axis = -1)
+ input_1 = tf.concat([features_c_tile, features_d_tile], axis = -1)
+
+ T_0 = self.mi_net(input_0)
+ T_1 = self.mi_net(input_1, reuse=True) - 1.
+
+ self.bound = tf.reduce_mean(T_0) - tf.reduce_mean(tf.exp(tf.reduce_logsumexp(T_1, 1) - math.log(self.batch_size*2)))
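`NWJ()` implements the Nguyen-Wainwright-Jordan bound E_p[T] - E_q[e^{T-1}], with the second expectation taken over all cross pairs via a row-wise logsumexp. A NumPy sketch; for a constant critic T = 1 the bound is exactly 0:

```python
import numpy as np

def nwj_bound(T_joint, T_cross_minus_1):
    # T_joint: [n] critic values on joint pairs
    # T_cross_minus_1: [n, n] critic values on all cross pairs, already minus 1
    n = T_cross_minus_1.shape[1]
    m = T_cross_minus_1.max(axis=1, keepdims=True)
    lse = m[:, 0] + np.log(np.exp(T_cross_minus_1 - m).sum(axis=1))  # stable row logsumexp
    return T_joint.mean() - np.mean(np.exp(lse - np.log(n)))

n = 8
print(nwj_bound(np.ones(n), np.zeros((n, n))))  # constant critic T = 1 -> 0.0
```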
+
+ def VUB(self, reuse=False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
+
+ p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
+
+ mu = prediction
+ logvar = prediction_1
+
+ self.lld = tf.reduce_mean(tf.reduce_sum(-(mu-self.features_d)**2 / tf.exp(logvar) - logvar, -1))
+ self.bound = 1. / 2. * tf.reduce_mean(mu**2 + tf.exp(logvar) - 1. - logvar)
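The `bound` term in `VUB()` is the closed-form KL divergence between the variational Gaussian N(mu, e^logvar) and a standard normal, averaged over dimensions; in NumPy:

```python
import numpy as np

def vub_bound(mu, logvar):
    # closed-form KL( N(mu, e^logvar) || N(0, I) ), averaged per dimension
    return 0.5 * np.mean(mu ** 2 + np.exp(logvar) - 1.0 - logvar)

print(vub_bound(np.zeros(4), np.zeros(4)))     # matching the standard normal: 0.0
print(vub_bound(np.ones(4), np.zeros(4)) > 0)  # any mismatch is strictly positive
```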
+
+ def L1OutUB(self, reuse=False):
+ with tf.variable_scope('mi_net', reuse=reuse):
+ p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
+
+ p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
+
+ mu = prediction
+ logvar = prediction_1
+
+ positive = tf.reduce_sum(-(mu - self.features_d)**2/2./tf.exp(logvar) - logvar/2., -1)
+
+            prediction_tile = tf.tile(tf.expand_dims(prediction, axis=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+            prediction_1_tile = tf.tile(tf.expand_dims(prediction_1, axis=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+            features_d_tile = tf.tile(tf.expand_dims(self.features_d, axis=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
+
+ all_probs = tf.reduce_sum(-(features_d_tile-prediction_tile)**2/2./tf.exp(prediction_1_tile) - prediction_1_tile/2., -1)
+ diag_mask = tf.diag([-20.]*self.batch_size*2)
+
+ negative = tf.reduce_logsumexp(all_probs + diag_mask, 0) - math.log(self.batch_size*2 - 1.)
+ self.bound = tf.reduce_mean(positive-negative)
+ self.lld = tf.reduce_mean(tf.reduce_sum(-(mu - self.features_d)**2/tf.exp(logvar) - logvar, -1))
+
+
+ def nce(self):
+
+        features_c_tile = tf.tile(tf.expand_dims(self.features_c, axis=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
+        features_d_tile = tf.tile(tf.expand_dims(self.features_d, axis=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
+ input_0 = tf.concat([self.features_c, self.features_d], axis = -1)
+ input_1 = tf.concat([features_c_tile, features_d_tile], axis = -1)
+
+ T_0 = self.mi_net(input_0)
+ T_1 = tf.reduce_mean(self.mi_net(input_1, reuse=True), axis=1)
+
+ E_pos = math.log(2.) - tf.nn.softplus(-T_0)
+ E_neg = tf.nn.softplus(-T_1) + T_1 - math.log(2.)
+
+ self.bound = tf.reduce_mean(E_pos - E_neg)
+
+
+ def label_predictor(self):
+ # with tf.variable_scope('label_predictor_fc1'):
+ # fc_1 = layers.fully_connected(inputs=self.features_for_prediction, num_outputs=self.latent_dim,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ with tf.variable_scope('label_predictor_logits'):
+ logits = layers.fully_connected(inputs=self.features_c_for_prediction, num_outputs=self.num_labels,
+ activation_fn=None, weights_initializer=self.initializer)
+
+ self.y_pred = tf.nn.softmax(logits)
+ self.y_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = self.y))
+ self.y_acc = utils.predictor_accuracy(self.y_pred, self.y)
+
+
+ def domain_predictor(self, reuse = False):
+ with tf.variable_scope('domain_predictor_fc1', reuse = reuse):
+ fc_1 = layers.fully_connected(inputs=self.features_d, num_outputs=self.latent_dim,
+ activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ with tf.variable_scope('domain_predictor_logits', reuse = reuse):
+ self.d_logits = layers.fully_connected(inputs=fc_1, num_outputs=self.num_domains,
+ activation_fn=None, weights_initializer=self.initializer)
+
+
+ logits_real = tf.slice(self.d_logits, [0, 0], [self.batch_size, -1])
+ logits_fake = tf.slice(self.d_logits, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
+
+ label_real = tf.slice(self.domains, [0, 0], [self.batch_size, -1])
+ label_fake = tf.slice(self.domains, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
+ label_pseudo = tf.ones(label_fake.shape) - label_fake
+
+ self.d_pred = tf.nn.sigmoid(self.d_logits)
+ real_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_real, labels = label_real))
+ fake_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_fake, labels = label_fake))
+ self.d_loss = real_d_loss + self.reg_tgt * fake_d_loss
+ self.d_acc = utils.predictor_accuracy(self.d_pred, self.domains)
+
+ # def domain_test(self, reuse=False):
+ # with tf.variable_scope('domain_test_fc1', reuse = reuse):
+ # fc_1 = layers.fully_connected(inputs=self.features, num_outputs=self.latent_dim,
+ # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
+ # with tf.variable_scope('domain_test_logits', reuse = reuse):
+ # d_logits = layers.fully_connected(inputs=fc_1, num_outputs=self.num_domains,
+ # activation_fn=None, weights_initializer=self.initializer)
+
+ # logits_real = tf.slice(d_logits, [0, 0], [self.batch_size, -1])
+ # logits_fake = tf.slice(d_logits, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
+
+ # self.test_pq = tf.nn.softmax(d_logits)
+
+ # label_real = tf.slice(self.domains, [0, 0], [self.batch_size, -1])
+ # label_fake = tf.slice(self.domains, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
+
+ # real_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_real, labels = label_real))
+ # fake_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_fake, labels = label_fake))
+ # self.d_test = real_d_loss + fake_d_loss
+
+ # def distance(self, a, b):
+ # a_matrix = tf.tile(tf.expand_dims(a, 0), [a.shape[0], 1, 1])
+ # b_matrix = tf.tile(tf.expand_dims(b, 0), [b.shape[0], 1, 1])
+ # b_matrix = tf.transpose(b_matrix, [1,0,2])
+ # distance = K.sqrt(K.maximum(K.sum(K.square(a_matrix - b_matrix), axis=2), K.epsilon()))
+ # return distance
+
+ # def calculate_mask(self, idx, idx_2):
+ # idx_matrix = tf.tile(tf.expand_dims(idx, 0), [idx.shape[0], 1])
+ # idx_2_matrix = tf.tile(tf.expand_dims(idx_2, 0), [idx_2.shape[0], 1])
+ # idx_2_transpose = tf.transpose(idx_2_matrix, [1,0])
+ # mask = tf.cast(tf.equal(idx_matrix, idx_2_transpose), tf.float32)
+ # return mask
+
+ # def contrastive_loss(self, y_true, y_pred, hinge=1.0):
+ # margin = hinge
+ # sqaure_pred = K.square(y_pred)
+ # margin_square = K.square(K.maximum(margin - y_pred, 0))
+ # return K.mean(y_true * sqaure_pred + (1 - y_true) * margin_square)
+
+ def _build_model(self):
+ self.feature_extractor_c()
+ self.feature_extractor_d()
+ self.label_predictor()
+ self.domain_predictor()
+ self.club()
+ # self.domain_test()
+
+ # self.src_pred = tf.argmax(tf.slice(self.y_pred, [0, 0], [self.batch_size, -1]), axis=-1)
+ # self.distance = self.distance(self.features_src, self.features_src)
+ # self.batch_compare = self.calculate_mask(self.src_pred, self.src_pred)
+ # self.con_loss = self.contrastive_loss(self.batch_compare, self.distance)
+
+        self.context_loss = self.y_loss + 0.1 * self.bound  # + self.reg_con*self.con_loss
+ self.domain_loss = self.d_loss
+
+ def _setup_train_ops(self):
+ context_vars = utils.vars_from_scopes(['feature_extractor_c', 'label_predictor'])
+ domain_vars = utils.vars_from_scopes(['feature_extractor_d', 'domain_predictor'])
+ mi_vars = utils.vars_from_scopes(['mi_net'])
+ self.domain_test_vars = utils.vars_from_scopes(['domain_test'])
+
+        # Original version: no loss scaling applied
+ self.train_context_ops = tf.train.AdamOptimizer(self.lr_g,0.5).minimize(self.context_loss, var_list = context_vars)
+ self.train_domain_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(self.domain_loss, var_list = domain_vars)
+ self.train_mi_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(-self.lld, var_list = mi_vars)
+ # self.test_domain_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(self.d_test, var_list = self.domain_test_vars)
+
+        # # With NPU loss scaling added
+ # trainContextOps = tf.train.AdamOptimizer(self.lr_g,0.5)
+ # trainDomainOps = tf.train.AdamOptimizer(self.lr_d,0.5)
+ # trainMiOps = tf.train.AdamOptimizer(self.lr_d,0.5)
+ #
+ # loss_scale_manager1 = ExponentialUpdateLossScaleManager(init_loss_scale=2**32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
+ # self.train_context_ops = NPULossScaleOptimizer(trainContextOps,loss_scale_manager1).minimize(self.context_loss, var_list = context_vars)
+ #
+ # loss_scale_manager2 = ExponentialUpdateLossScaleManager(init_loss_scale=2 ** 32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
+ # self.train_domain_ops = NPULossScaleOptimizer(trainDomainOps,loss_scale_manager2).minimize(self.domain_loss, var_list = domain_vars)
+ #
+ # loss_scale_manager3 = ExponentialUpdateLossScaleManager(init_loss_scale=2 ** 32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
+ # self.train_mi_ops = NPULossScaleOptimizer(trainMiOps,loss_scale_manager3).minimize(-self.lld, var_list = mi_vars)
+
+
+
+
+
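The NWJ and InfoNCE estimators above share one tiling trick: score the critic on the N paired samples and on all N×N cross-pairings, then combine the two terms. A minimal NumPy sketch of the NWJ bound with a hypothetical bilinear critic (the toy data, `W`, and the critic itself are illustrative, not taken from the model):

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n, d = 64, 8

# Correlated toy features: y is a noisy copy of x, so their MI is positive.
x = rng.standard_normal((n, d))
y = x + 0.5 * rng.standard_normal((n, d))

# Illustrative bilinear critic T(a, b) = a W b^T.
W = 0.1 * rng.standard_normal((d, d))

# Paired scores T(x_i, y_i) and all cross-pair scores T(x_i, y_j) - 1.
t_pos = np.sum((x @ W) * y, axis=-1)                           # shape (n,)
t_all = np.sum((x @ W)[:, None, :] * y[None, :, :], -1) - 1.0  # shape (n, n)

# NWJ lower bound: E[T_0] - E[exp(logsumexp_j T_1[i, j] - log n)]
nwj = t_pos.mean() - np.exp(logsumexp(t_all, axis=1) - np.log(n)).mean()
print(nwj)
```

With a trained critic this quantity lower-bounds I(x; y); with the random `W` above it is merely finite, which is all the sketch demonstrates.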
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/imageloader.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/imageloader.py
new file mode 100644
index 000000000..0980af08d
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/imageloader.py
@@ -0,0 +1,431 @@
+from npu_bridge.npu_init import *
+import numpy as np
+import pickle
+from tensorflow.examples.tutorials.mnist import input_data
+import scipy.io
+from utils import to_one_hot
+import pdb
+import cv2
+import os
+#from scipy.misc import imsave
+
+def load_datasets(data_dir = './', sets={'mnist':1, 'svhn':1, 'mnistm':1, 'usps':1}):
+ datasets = {}
+ for key in sets.keys():
+ datasets[key] = {}
+    if sets.get('mnist'):  # originally sets.has_key('mnist'), which Python 3.x does not support
+ mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
+ mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
+
+ # mnist_inv = mnist_train * (-1) + 255
+ # mnist_train = np.concatenate([mnist_train, mnist_inv])
+ mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
+ mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
+ # dataset['mnist']['train'] = {'images': mnist_train, 'labels': np.concatenate([mnist.train.labels, mnist.train.labels])}
+ datasets['mnist']['train'] = {'images': mnist_train, 'labels': mnist.train.labels}
+ datasets['mnist']['test'] = {'images': mnist_test, 'labels': mnist.test.labels}
+ datasets['mnist']['valid'] = {'images': mnist_valid, 'labels': mnist.validation.labels}
+
+
+ if sets.get('mnist32'):
+ mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
+ mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
+ mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
+ mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
+ mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
+
+ mnist_train = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_train]
+ mnist_train = np.concatenate(mnist_train)
+ mnist_test = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_test]
+ mnist_test = np.concatenate(mnist_test)
+ mnist_valid = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_valid]
+ mnist_valid = np.concatenate(mnist_valid)
+
+ datasets['mnist32']['train'] = {'images': mnist_train, 'labels': mnist.train.labels}
+ datasets['mnist32']['test'] = {'images': mnist_test, 'labels': mnist.test.labels}
+ datasets['mnist32']['valid'] = {'images': mnist_valid, 'labels': mnist.validation.labels}
+
+ # if sets.has_key('svhn'):
+ # svhn = scipy.io.loadmat(data_dir + 'SVHN/svhn.mat')
+ # svhn_train = svhn['train'].astype(np.uint8)
+ # svhn_labtrain = svhn['labtrain'].astype(np.int32)
+ # svhn_valid = svhn['val'].astype(np.uint8)
+ # svhn_labval= svhn['labval'].astype(np.int32)
+ # svhn_test = svhn['test'].astype(np.uint8)
+ # svhn_labtest =svhn['labtest'].astype(np.int32)
+ # dataset['svhn']['train'] = {'images': svhn_train, 'labels': svhn_labtrain}
+ # dataset['svhn']['test'] = {'images': svhn_test, 'labels': svhn_labtest}
+ # dataset['svhn']['valid'] = {'images': svhn_valid, 'labels': svhn_labval}
+
+ if sets.get('svhn'):
+ svhn_train = scipy.io.loadmat(data_dir + 'SVHN/train_32x32.mat')
+ svhn_train_data = svhn_train['X'].transpose((3,0,1,2)).astype(np.uint8)
+
+ svhn_train_label = svhn_train['y'] + 1
+ svhn_train_label[svhn_train_label > 10] = 1
+ svhn_train_label = to_one_hot(svhn_train_label)
+
+ svhn_valid_data = svhn_train_data[-5000:]
+ svhn_train_data = svhn_train_data[:-5000]
+
+ svhn_valid_label = svhn_train_label[-5000:]
+ svhn_train_label = svhn_train_label[:-5000]
+
+ svhn_test = scipy.io.loadmat(data_dir + 'SVHN/test_32x32.mat')
+ svhn_test_data = svhn_test['X'].transpose((3,0,1,2)).astype(np.uint8)
+
+ svhn_test_label = svhn_test['y'] + 1
+ svhn_test_label[svhn_test_label > 10] = 1
+ svhn_test_label = to_one_hot(svhn_test_label)
+
+ # svhn_train_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_train_data]
+ # svhn_train_data = np.concatenate(svhn_train_data)
+ # svhn_test_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_test_data]
+ # svhn_test_data = np.concatenate(svhn_test_data)
+ # svhn_valid_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_valid_data]
+ # svhn_valid_data = np.concatenate(svhn_valid_data)
+
+ svhn_train_data = svhn_train_data[:,2:30,2:30,:]
+ svhn_test_data = svhn_test_data[:,2:30,2:30,:]
+ svhn_valid_data = svhn_valid_data[:,2:30,2:30,:]
+
+
+
+ datasets['svhn']['train'] = {'images': svhn_train_data, 'labels': svhn_train_label}
+ datasets['svhn']['test'] = {'images': svhn_test_data, 'labels': svhn_test_label}
+ datasets['svhn']['valid'] = {'images': svhn_valid_data, 'labels': svhn_valid_label}
+
+ if sets.get('mnistm'):
+ if 'mnist' not in locals():
+ mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
+ mnistm = pickle.load(open(data_dir + 'MNISTM/mnistm_data.pkl', 'rb'))
+ mnistm_train = mnistm['train']
+ mnistm_test = mnistm['test']
+ mnistm_valid = mnistm['valid']
+
+ datasets['mnistm']['train'] = {'images': mnistm_train, 'labels': mnist.train.labels}
+ datasets['mnistm']['test'] = {'images': mnistm_test, 'labels': mnist.test.labels}
+ datasets['mnistm']['valid'] = {'images': mnistm_valid, 'labels': mnist.validation.labels}
+ if sets.get('usps'):
+ usps_file = open(data_dir + 'USPS/usps_28x28.pkl', 'rb')
+ usps = pickle.load(usps_file)
+ n = 5104
+ usps_train = (usps[0][0][:n].reshape(-1,28,28,1)*255.).astype('uint8')
+ usps_train = np.concatenate([usps_train, usps_train, usps_train], 3)
+ usps_valid = (usps[0][0][n:].reshape(-1,28,28,1)*255.).astype('uint8')
+ usps_valid = np.concatenate([usps_valid, usps_valid, usps_valid], 3)
+ usps_test = (usps[1][0].reshape(-1,28,28,1)*255.).astype('uint8')
+ usps_test = np.concatenate([usps_test, usps_test, usps_test], 3)
+ usps_images = (np.concatenate([usps[0][0], usps[1][0]]).reshape(-1, 28, 28, 1) * 255.).astype(np.uint8)
+ usps_images = np.concatenate([usps_images, usps_images, usps_images], 3)
+
+ datasets['usps']['train'] = {'images': usps_train, 'labels': to_one_hot(usps[0][1][:n])}
+ datasets['usps']['test'] = {'images': usps_test, 'labels': to_one_hot(usps[1][1])}
+ datasets['usps']['valid'] = {'images': usps_valid, 'labels': to_one_hot(usps[0][1][n:])}
+
+ if sets.get('cifar'):
+ batch_1 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_1.mat')
+ batch_2 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_2.mat')
+ batch_3 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_3.mat')
+ batch_4 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_4.mat')
+ batch_5 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_5.mat')
+ batch_test = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/test_batch.mat')
+
+ train_batch = np.concatenate([batch_1['data'], batch_2['data'], batch_3['data'],
+ batch_4['data'], batch_5['data']])
+ train_label = np.concatenate([batch_1['labels'], batch_2['labels'], batch_3['labels'],
+ batch_4['labels'], batch_5['labels']])
+
+ cifar_train_data = np.reshape(train_batch, [-1, 3, 32, 32]).transpose((0,2,3,1))
+ cifar_train_label = to_one_hot(np.squeeze(train_label))
+
+ cifar_train_data_reduce = cifar_train_data[cifar_train_label[:,6]==0]
+ cifar_train_label_tmp = cifar_train_label[cifar_train_label[:,6]==0]
+ cifar_train_label_reduce = np.concatenate([cifar_train_label_tmp[:,:6], cifar_train_label_tmp[:,7:]], axis=1)
+
+ # cifar_valid_data = cifar_train_data[-5000:]
+ # cifar_train_data = cifar_train_data[:-5000]
+
+ # cifar_valid_label = cifar_train_label[-5000:]
+ # cifar_train_label = cifar_train_label[:-5000]
+
+ cifar_valid_data_reduce = cifar_train_data_reduce[-5000:]
+ cifar_train_data_reduce = cifar_train_data_reduce[:-5000]
+
+ cifar_valid_label_reduce = cifar_train_label_reduce[-5000:]
+ cifar_train_label_reduce = cifar_train_label_reduce[:-5000]
+
+ cifar_test_data = np.reshape(batch_test['data'], [-1, 3, 32, 32]).transpose((0,2,3,1))
+ cifar_test_label = to_one_hot(np.squeeze(batch_test['labels']))
+
+ cifar_test_data_reduce = cifar_test_data[cifar_test_label[:,6]==0]
+ cifar_test_label_tmp = cifar_test_label[cifar_test_label[:,6]==0]
+ cifar_test_label_reduce = np.concatenate([cifar_test_label_tmp[:,:6], cifar_test_label_tmp[:,7:]], axis=1)
+
+ datasets['cifar']['train'] = {'images': cifar_train_data_reduce, 'labels': cifar_train_label_reduce}
+ datasets['cifar']['test'] = {'images': cifar_test_data_reduce, 'labels': cifar_test_label_reduce}
+ datasets['cifar']['valid'] = {'images': cifar_valid_data_reduce, 'labels': cifar_valid_label_reduce}
+
+ if sets.get('stl'):
+ stl_train = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/stl10_matlab/train.mat')
+ stl_train_data = np.reshape(stl_train['X'], [-1, 3, 96, 96]).transpose((0,3,2,1))
+
+ stl_train_label = np.squeeze(stl_train['y']-1)
+
+ stl_train_label_tmp = np.zeros([stl_train_label.shape[0], 10])
+
+ stl_train_label_tmp[stl_train_label==0,0]=1.
+ stl_train_label_tmp[stl_train_label==1,2]=1.
+ stl_train_label_tmp[stl_train_label==2,1]=1.
+ stl_train_label_tmp[stl_train_label==3,3]=1.
+ stl_train_label_tmp[stl_train_label==4,4]=1.
+ stl_train_label_tmp[stl_train_label==5,5]=1.
+ stl_train_label_tmp[stl_train_label==6,7]=1.
+ stl_train_label_tmp[stl_train_label==7,6]=1.
+ stl_train_label_tmp[stl_train_label==8,8]=1.
+ stl_train_label_tmp[stl_train_label==9,9]=1.
+
+
+ stl_train_data_reduce = stl_train_data[stl_train_label_tmp[:,6]==0]
+ stl_train_label_tmp = stl_train_label_tmp[stl_train_label_tmp[:,6]==0]
+ stl_train_label_reduce = np.concatenate([stl_train_label_tmp[:,:6], stl_train_label_tmp[:,7:]], axis=1)
+
+
+ stl_test = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/stl10_matlab/test.mat')
+ stl_test_data = np.reshape(stl_test['X'], [-1, 3, 96, 96]).transpose((0,3,2,1))
+
+ stl_test_label = np.squeeze(stl_test['y']-1)
+
+ stl_test_label_tmp = np.zeros([stl_test_label.shape[0], 10])
+
+ stl_test_label_tmp[stl_test_label==0,0]=1.
+ stl_test_label_tmp[stl_test_label==1,2]=1.
+ stl_test_label_tmp[stl_test_label==2,1]=1.
+ stl_test_label_tmp[stl_test_label==3,3]=1.
+ stl_test_label_tmp[stl_test_label==4,4]=1.
+ stl_test_label_tmp[stl_test_label==5,5]=1.
+ stl_test_label_tmp[stl_test_label==6,7]=1.
+ stl_test_label_tmp[stl_test_label==7,6]=1.
+ stl_test_label_tmp[stl_test_label==8,8]=1.
+ stl_test_label_tmp[stl_test_label==9,9]=1.
+
+ stl_test_data_reduce = stl_test_data[stl_test_label_tmp[:,6]==0]
+ stl_test_label_tmp = stl_test_label_tmp[stl_test_label_tmp[:,6]==0]
+ stl_test_label_reduce = np.concatenate([stl_test_label_tmp[:,:6], stl_test_label_tmp[:,7:]], axis=1)
+
+
+ stl_valid_data_reduce = stl_train_data_reduce[-500:]
+ stl_train_data_reduce = stl_train_data_reduce[:-500]
+
+ stl_valid_label_reduce = stl_train_label_reduce[-500:]
+ stl_train_label_reduce = stl_train_label_reduce[:-500]
+
+ stl_train_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_train_data_reduce]
+ stl_train_data_reduce = np.concatenate(stl_train_data_reduce)
+ stl_test_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_test_data_reduce]
+ stl_test_data_reduce = np.concatenate(stl_test_data_reduce)
+ stl_valid_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_valid_data_reduce]
+ stl_valid_data_reduce = np.concatenate(stl_valid_data_reduce)
+
+ datasets['stl']['train'] = {'images': stl_train_data_reduce, 'labels': stl_train_label_reduce}
+ datasets['stl']['test'] = {'images': stl_test_data_reduce, 'labels': stl_test_label_reduce}
+ datasets['stl']['valid'] = {'images': stl_valid_data_reduce, 'labels': stl_valid_label_reduce}
+
+ return datasets
+
+
+def save_dataset(datasets, save_path='./save_datasets/'):
+ if not os.path.exists(save_path):
+ os.mkdir(save_path)
+ for key in datasets.keys():
+ train = datasets[key]['train']['images']
+ valid = datasets[key]['valid']['images']
+ test = datasets[key]['test']['images']
+ labtrain = datasets[key]['train']['labels']
+ labval = datasets[key]['valid']['labels']
+ labtest = datasets[key]['test']['labels']
+ scipy.io.savemat(save_path + key + '.mat',{'train':train, 'val':valid,'test':test,'labtrain':labtrain,'labval':labval,'labtest':labtest})
+ return 0
+
+def sets_concatenate(datasets, sets):
+ N_train = 0
+ N_valid = 0
+ N_test = 0
+
+ for key in sets:
+ label_len = datasets[key]['train']['labels'].shape[1]
+ N_train = N_train + datasets[key]['train']['images'].shape[0]
+ N_valid = N_valid + datasets[key]['valid']['images'].shape[0]
+ N_test = N_test + datasets[key]['test']['images'].shape[0]
+ S = datasets[key]['train']['images'].shape[1]
+ train = {'images': np.zeros((N_train,S,S,3)).astype(np.float32),'labels':np.zeros((N_train,label_len)).astype('float32'),'domains':np.zeros((N_train,)).astype('float32')}
+ valid = {'images': np.zeros((N_valid,S,S,3)).astype(np.float32),'labels':np.zeros((N_valid,label_len)).astype('float32'),'domains':np.zeros((N_valid,)).astype('float32')}
+ test = {'images': np.zeros((N_test,S,S,3)).astype(np.float32),'labels':np.zeros((N_test,label_len)).astype('float32'),'domains':np.zeros((N_test,)).astype('float32')}
+ srt = 0
+ edn = 0
+ for key in sets:
+ domain = sets[key]
+ srt = edn
+ edn = srt + datasets[key]['train']['images'].shape[0]
+ train['images'][srt:edn,:,:,:] = datasets[key]['train']['images']
+ train['labels'][srt:edn,:] = datasets[key]['train']['labels']
+ train['domains'][srt:edn] = domain * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32')
+ srt = 0
+ edn = 0
+ for key in sets:
+ domain = sets[key]
+ srt = edn
+ edn = srt + datasets[key]['valid']['images'].shape[0]
+ valid['images'][srt:edn,:,:,:] = datasets[key]['valid']['images']
+ valid['labels'][srt:edn,:] = datasets[key]['valid']['labels']
+ valid['domains'][srt:edn] = domain * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32')
+ srt = 0
+ edn = 0
+ for key in sets:
+ domain = sets[key]
+ srt = edn
+ edn = srt + datasets[key]['test']['images'].shape[0]
+ test['images'][srt:edn,:,:,:] = datasets[key]['test']['images']
+ test['labels'][srt:edn,:] = datasets[key]['test']['labels']
+ test['domains'][srt:edn] = domain * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32')
+ return train, valid, test
+
+def source_target(datasets, sources, targets, unify_source = False):
+ N1 = len(sources.keys())
+ N_domain = N1 + len(targets.keys())
+ domain_idx = 0
+ for key in sources.keys():
+ sources[key] = domain_idx
+ domain_idx = domain_idx + 1
+ for key in targets.keys():
+ targets[key] = domain_idx
+ domain_idx = domain_idx + 1
+
+ source_train, source_valid, source_test = sets_concatenate(datasets, sources)
+ target_train, target_valid, target_test = sets_concatenate(datasets, targets)
+
+ if unify_source:
+ source_train['domains'] = to_one_hot(0 * source_train['domains'], 2)
+ source_valid['domains'] = to_one_hot(0 * source_valid['domains'], 2)
+ source_test['domains'] = to_one_hot(0 * source_test['domains'], 2)
+ target_train['domains'] = to_one_hot(0 * target_train['domains'] + 1, 2)
+ target_valid['domains'] = to_one_hot(0 * target_valid['domains'] + 1, 2)
+ target_test['domains'] = to_one_hot(0 * target_test['domains'] + 1, 2)
+ else:
+ source_train['domains'] = to_one_hot(source_train['domains'], N_domain)
+ source_valid['domains'] = to_one_hot(source_valid['domains'], N_domain)
+ source_test['domains'] = to_one_hot(source_test['domains'], N_domain)
+ target_train['domains'] = to_one_hot(target_train['domains'], N_domain)
+ target_valid['domains'] = to_one_hot(target_valid['domains'], N_domain)
+ target_test['domains'] = to_one_hot(target_test['domains'], N_domain)
+ return source_train, source_valid, source_test, target_train, target_valid, target_test
+
+def normalize(data):
+ image_mean = data - np.expand_dims(np.expand_dims(data.mean((1,2)),1),1)
+ image_std = np.sqrt((image_mean**2).mean((1,2))+1e-8)
+ return image_mean / np.expand_dims(np.expand_dims(image_std,1),1)
+
+def normalize_dataset(datasets, t = 'norm'):
+    if t == 'mean':
+ temp_data = []
+ for key in datasets.keys():
+ temp_data.append(datasets[key]['train']['images'])
+ temp_data = np.concatenate(temp_data)
+ image_mean = temp_data.mean((0, 1, 2))
+ image_mean = image_mean.astype('float32')
+ for key in datasets.keys():
+ datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32') - image_mean)/255.
+ datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32') - image_mean)/255.
+ datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32') - image_mean)/255.
+    elif t == 'standard':
+ for key in datasets.keys():
+ datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32'))/255.
+ datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32'))/255.
+ datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32'))/255.
+    elif t == 'none':
+        pass  # leave the data unchanged
+    elif t == 'individual':
+ for key in datasets.keys():
+ temp_data = datasets[key]['train']['images']
+ image_mean = temp_data.mean((0, 1, 2))
+ image_mean = image_mean.astype('float32')
+ datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32') - image_mean)/255.
+ datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32') - image_mean)/255.
+ datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32') - image_mean)/255.
+    elif t == 'norm':
+ for key in datasets.keys():
+ if key =='mnist':
+ tmp_1 = datasets[key]['train']['images'][:(len(datasets[key]['train']['images']) // 2)]
+ tmp_2 = datasets[key]['train']['images'][(len(datasets[key]['train']['images']) // 2):]
+ datasets[key]['train']['images'] = np.concatenate([normalize(tmp_1),normalize(tmp_2)])
+ else:
+ datasets[key]['train']['images'] = normalize(datasets[key]['train']['images'])
+
+ datasets[key]['valid']['images'] = normalize(datasets[key]['valid']['images'])
+ datasets[key]['test']['images'] = normalize(datasets[key]['test']['images'])
+
+ return datasets
+
+def source_target_separate(datasets, sources, targets):
+ N1 = len(sources.keys())
+ N_domain = N1 + len(targets.keys())
+ domain_idx = 0
+ sets = {}
+ for key in sources.keys():
+ sources[key] = domain_idx
+ sets[key] = domain_idx
+ domain_idx = domain_idx + 1
+ for key in targets.keys():
+ targets[key] = domain_idx
+ sets[key] = domain_idx
+ domain_idx = domain_idx + 1
+ for key in datasets.keys():
+ datasets[key]['train']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32'), N_domain)
+ datasets[key]['valid']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32'), N_domain)
+ datasets[key]['test']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32'), N_domain)
+ return datasets
+
+def source_target_separate_baseline(datasets, sources, targets):
+ N1 = len(sources.keys())
+ N_domain = N1 + len(targets.keys())
+ domain_idx = 0
+ domains = {}
+ for key in sources.keys():
+ sources[key] = domain_idx
+ domains[key] = domain_idx
+ domain_idx = domain_idx + 1
+ for key in targets.keys():
+ targets[key] = domain_idx
+ domains[key] = domain_idx
+ for key in datasets.keys():
+ datasets[key]['train']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32'), 2)
+ datasets[key]['valid']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32'), 2)
+ datasets[key]['test']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32'), 2)
+ return datasets
+
+
+# if __name__ == '__main__':
+# data_dir = '../dataset/folder/MNIST_data'
+# if not os.path.exists(data_dir):
+# os.makedirs(data_dir)
+# mnist = input_data.read_data_sets(data_dir, one_hot=True)
+# mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
+# mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
+#
+# # mnist_inv = mnist_train * (-1) + 255
+# # mnist_train = np.concatenate([mnist_train, mnist_inv])
+# mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
+# mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
+# mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
+# mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
+# # datasets['mnist']['train'] = {'images': mnist_train, 'labels': np.concatenate([mnist.train.labels, mnist.train.labels])}
+# print(mnist_test.shape)
+# print(mnist)
+
+
+
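`normalize` above standardizes each image independently, per channel, over the spatial axes. A small NumPy restatement of that behaviour (the batch here is illustrative):

```python
import numpy as np

def normalize(data):
    # Per-image, per-channel standardization over the spatial axes (H, W),
    # mirroring the epsilon-stabilized std used in imageloader.normalize.
    mean = data.mean(axis=(1, 2), keepdims=True)
    centered = data - mean
    std = np.sqrt((centered ** 2).mean(axis=(1, 2), keepdims=True) + 1e-8)
    return centered / std

batch = np.random.default_rng(1).uniform(0, 255, (4, 28, 28, 3)).astype('float32')
out = normalize(batch)
print(out.mean(axis=(1, 2)))  # ~0 for every image and channel
```

Each (image, channel) slice of the result has mean ~0 and standard deviation ~1, which is why the loader can feed raw uint8 pixel data straight into this path.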
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/main_DANN.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/main_DANN.py
new file mode 100644
index 000000000..0f63749a7
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/main_DANN.py
@@ -0,0 +1,383 @@
+from npu_bridge.npu_init import *
+import os
+import argparse
+
+import numpy as np
+#from sklearn.manifold import TSNE
+#import scipy.io
+
+import tensorflow as tf
+#import tensorflow.contrib.slim as slim
+
+from MNISTModel_DANN import MNISTModel_DANN
+import imageloader as dataloader
+import utils
+from tqdm import tqdm
+
+import moxing as mox
+import precision_tool.tf_config as npu_tf_config
+import precision_tool.lib.config as CONFIG
+
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # index of the GPU to use (0 or 1)
+
+gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.25)
+
+parser = argparse.ArgumentParser(description="Domain Adaptation Training")
+parser.add_argument("--data_url", type=str, default="obs://cann-id1254/dataset/", help="path to dataset folder")  # note: adjust this path as needed
+parser.add_argument("--train_url", type=str, default="./output")
+parser.add_argument("--save_path", type=str, default="obs://cann-id1254/savef/", help="path to save experiment output")  # note: adjust this path as needed
+parser.add_argument("--source", type=str, default="svhn", help="specify source dataset")
+parser.add_argument("--target", type=str, default="mnist", help="specify target dataset")
+
+
+args, unparsed = parser.parse_known_args()
+
+
+data_path = args.data_url
+save_path = args.save_path
+batch_size = 64
+num_steps = 5000  # originally 15000
+epsilon = 0.5
+M = 0.1
+num_test_steps = 5000
+valid_steps = 100
+
+# Create the data directories inside the ModelArts container
+data_dir = "/cache/dataset/"
+os.makedirs(data_dir)
+print("Created data directory")
+
+savePath = "/cache/savePath/"
+os.makedirs(savePath)
+print("Created save directory")
+
+model_dir = "/cache/result"
+os.makedirs(model_dir)
+
+# Copy the dataset from OBS into the ModelArts container
+mox.file.copy_parallel(data_path, data_dir)
+
+# The bucket data was copied into the ModelArts container above,
+# so load from data_dir instead of data_path.
+datasets = dataloader.load_datasets(data_dir, {args.source: 1, args.target: 1})
+
+
+# dataset = dataloader.normalize_dataset(dataset)
+sources = {args.source:1}
+targets = {args.target:1}
+description = utils.description(sources, targets)
+source_train, source_valid, source_test, target_train, target_valid, target_test = dataloader.source_target(datasets, sources, targets, unify_source = True)
+
+options = {}
+options['sample_shape'] = (28,28,3)
+options['num_domains'] = 2
+options['num_targets'] = 1
+options['num_labels'] = 10
+options['batch_size'] = batch_size
+options['G_iter'] = 1
+options['D_iter'] = 1
+options['ef_dim'] = 32
+options['latent_dim'] = 128
+options['t_idx'] = np.argmax(target_test['domains'][0])
+options['source_num'] = batch_size
+options['target_num'] = batch_size
+options['reg_disc'] = 0.1
+options['reg_con'] = 0.1
+options['lr_g'] = 0.001
+options['lr_d'] = 0.001
+options['reg_tgt'] = 1.0
+description = utils.description(sources, targets)
+description = description + '_DANN_' + str(options['reg_disc'])
+
+
+tf.reset_default_graph()
+graph = tf.get_default_graph()
+model = MNISTModel_DANN(options)
+
+# Floating-point overflow detection
+# config = npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options))
+# config = npu_tf_config.session_dump_config(config, action='overflow')
+# sess = tf.Session(graph = graph, config=config)
+
+# Disable all operator fusion rules
+# config = npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options))
+# config = npu_tf_config.session_dump_config(config, action='fusion_off')
+# sess = tf.Session(graph = graph,config=config)
+
+# Create a temporary directory for profiling data during ModelArts training
+profiling_dir = "/cache/profiling"
+os.makedirs(profiling_dir, exist_ok=True)
+
+# Mixed precision: with mixed precision alone, accuracy matches the V100 baseline (Job: 5-20-10-56)
+config_proto = tf.ConfigProto(gpu_options=gpu_options)
+custom_op = config_proto.graph_options.rewrite_options.custom_optimizers.add()
+custom_op.name = 'NpuOptimizer'
+# Enable mixed precision
+custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")
+# Enable profiling collection (Job: 5-20-19-16)
+custom_op.parameter_map["profiling_mode"].b = True
+# # Collect task trace data only
+# custom_op.parameter_map["profiling_options"].s = tf.compat.as_bytes('{"output":"/cache/profiling","task_trace":"on"}')
+
+# Collect both task trace and training (iteration) trace data. Start with the task trace alone; collect the training trace as well if the problem still cannot be pinpointed.
+custom_op.parameter_map["profiling_options"].s = tf.compat.as_bytes('{"output":"/cache/profiling","task_trace":"on","training_trace":"on","fp_point":"resnet_model/conv2d/Conv2Dresnet_model/batch_normalization/FusedBatchNormV3_Reduce","bp_point":"gradients/AddN_70"}')
+
+config = npu_config_proto(config_proto=config_proto)
+sess = tf.Session(graph = graph,config=config)
+
+# Loss scaling only (Job: 5-20-15-05)
+
+# # Mixed precision + loss scaling + overflow data collection (Job: 5-20-16-56)
+# # 1. Mixed precision
+# config_proto = tf.ConfigProto(gpu_options=gpu_options)
+# custom_op = config_proto.graph_options.rewrite_options.custom_optimizers.add()
+# custom_op.name = 'NpuOptimizer'
+# custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")
+# # 2. Overflow data collection
+# overflow_data_dir = "/cache/overflow"
+# os.makedirs(overflow_data_dir)
+# # dump_path: where dump data is stored; the directory must be created in advance on the training environment (container or host side), and the run user configured at install time must have read/write permission
+# custom_op.parameter_map["dump_path"].s = tf.compat.as_bytes(overflow_data_dir)
+# # enable_dump_debug: whether to enable overflow detection
+# custom_op.parameter_map["enable_dump_debug"].b = True
+# # dump_debug_mode: overflow detection mode, one of all/aicore_overflow/atomic_overflow
+# custom_op.parameter_map["dump_debug_mode"].s = tf.compat.as_bytes("all")
+# config = npu_config_proto(config_proto=config_proto)
+# sess = tf.Session(graph = graph,config=config)
+
+# Session created when the original code was migrated to NPU
+#sess = tf.Session(graph = graph, config=npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options)))
+
+tf.global_variables_initializer().run(session = sess)
+
+record = []
+
+gen_source_batch = utils.batch_generator([source_train['images'],
+ source_train['labels'],
+ source_train['domains']], batch_size)
+
+print("gen_source_batch ",gen_source_batch)
+gen_target_batch = utils.batch_generator([target_train['images'],
+ target_train['labels'],
+ target_train['domains']], batch_size)
+print("gen_target_batch ", gen_target_batch)
+gen_source_batch_valid = utils.batch_generator([np.concatenate([source_valid['images'], source_test['images']]),
+ np.concatenate([source_valid['labels'], source_test['labels']]),
+ np.concatenate([source_valid['domains'], source_test['domains']])],
+ batch_size)
+print("gen_source_batch_valid ",gen_source_batch_valid)
+gen_target_batch_valid = utils.batch_generator([np.concatenate([target_valid['images'], target_test['images']]),
+ np.concatenate([target_valid['labels'], target_test['labels']]),
+ np.concatenate([target_valid['domains'], target_test['domains']])],
+ batch_size)
+print("gen_target_batch_valid",gen_target_batch_valid)
+# source_data_valid = np.concatenate([source_valid['images'], source_test['images']])
+# target_data_valid = np.concatenate([target_valid['images'], target_test['images']])
+# source_label_valid = np.concatenate([source_valid['labels'], source_test['labels']])
+#
+# print("source_data_valid ",source_data_valid.shape)
+# print("target_data_valid ",target_data_valid.shape)
+# print("source_label_valid ",source_label_valid.shape)
+
+#save_path = './Result/' + description + '/'
+# print("save_path",save_path)
+
+# # Create the save directory
+# if not os.path.exists(save_path):
+# print("Creating ",save_path)
+# os.makedirs(save_path)
+
+# save_path = os.path.join(savePath, description)
+# print("save_path",save_path)
+# if not os.path.exists(save_path):
+# print("Creating!!!")
+# os.mkdir(save_path)
+
+def compute_MMD(H_fake, H_real, sigma_range=[5]):
+
+ min_len = min([len(H_real),len(H_fake)])
+ h_real = H_real[:min_len]
+ h_fake = H_fake[:min_len]
+
+ dividend = 1
+ dist_x, dist_y = h_fake/dividend, h_real/dividend
+ x_sq = np.expand_dims(np.sum(dist_x**2, axis=1), 1) # 64*1
+ y_sq = np.expand_dims(np.sum(dist_y**2, axis=1), 1) # 64*1
+ dist_x_T = np.transpose(dist_x)
+ dist_y_T = np.transpose(dist_y)
+ x_sq_T = np.transpose(x_sq)
+ y_sq_T = np.transpose(y_sq)
+
+ tempxx = -2*np.matmul(dist_x,dist_x_T) + x_sq + x_sq_T # (xi -xj)**2
+ tempxy = -2*np.matmul(dist_x,dist_y_T) + x_sq + y_sq_T # (xi -yj)**2
+ tempyy = -2*np.matmul(dist_y,dist_y_T) + y_sq + y_sq_T # (yi -yj)**2
+
+
+    # Accumulate kernel means over all sigmas (initializing inside the loop
+    # would discard every sigma but the last), then take the square root once
+    kxx, kxy, kyy = 0, 0, 0
+    for sigma in sigma_range:
+        kxx += np.mean(np.exp(-tempxx/2/(sigma**2)))
+        kxy += np.mean(np.exp(-tempxy/2/(sigma**2)))
+        kyy += np.mean(np.exp(-tempyy/2/(sigma**2)))
+
+    gan_cost_g = np.sqrt(kxx + kyy - 2*kxy)
+    return gan_cost_g
+
+best_valid = -1
+best_acc = -1
+
+best_src_acc = -1
+best_src = -1
+
+best_bound_acc = -1
+best_bound = 100000
+
+best_iw_acc = -1
+best_iw = 100000
+
+best_ben_acc = -1
+best_ben = 100000
+
+#output_file = save_path+'acc.txt'
+output_file = savePath+description+"acc.txt"
+print('Training...')
+with open(output_file, 'w') as fout:
+ for i in tqdm(range(1, num_steps + 1)):
+
+ #print("step ",i)
+ # Adaptation param and learning rate schedule as described in the paper
+
+        X0, y0, d0 = gen_source_batch.__next__()  # Python 2's g.next() became g.__next__() in Python 3; next(g) also works
+        X1, y1, d1 = gen_target_batch.__next__()
+
+ X = np.concatenate([X0, X1], axis = 0)
+ #print("Input X ",X.shape)
+ d = np.concatenate([d0, d1], axis = 0)
+ #print("Input d ",d.shape)
+ #print("Input y0 ",y0.shape)
+ for j in range(options['D_iter']):
+ # Update Adversary
+ _, mi_loss = \
+ sess.run([model.train_mi_ops, model.bound],
+ feed_dict={model.X:X, model.train: True})
+
+ for j in range(options['G_iter']):
+            # Update feature extractor & label predictor
+ _, tploss, tp_acc = \
+ sess.run([model.train_context_ops, model.y_loss, model.y_acc],
+ feed_dict={model.X: X, model.y: y0, model.train: True})
+
+ for j in range(options['G_iter']):
+            # Update feature extractor & label predictor
+ _, td_loss, td_acc = \
+ sess.run([model.train_domain_ops, model.d_loss, model.d_acc],
+ feed_dict={model.X: X, model.domains: d, model.train: True})
+
+ if i % 10 == 0:
+ print ('%s iter %d mi_loss: %.4f d_loss: %.4f p_acc: %.4f' % \
+ (description, i, mi_loss, td_loss, tp_acc))
+
+ '''
+ if i % valid_steps == 0:
+ # Calculate bound
+ # init_new_vars_op = tf.initialize_variables(model.domain_test_vars)
+ # sess.run(init_new_vars_op)
+
+ # for s in range(num_test_steps):
+ # X0_test, y0_test, d0_test = gen_source_batch.next()
+ # X1_test, y1_test, d1_test = gen_target_batch.next()
+
+ # X_test = np.concatenate([X0_test, X1_test], axis = 0)
+ # d_test = np.concatenate([d0_test, d1_test], axis = 0)
+
+ # _ = sess.run(model.test_domain_ops, feed_dict={model.X:X_test,
+ # model.domains: d_test, model.train: False})
+
+ # source_pq = utils.get_data_pq(sess, model, source_data_valid)
+ # target_pq = utils.get_data_pq(sess, model, target_data_valid)
+
+ # st_ratio = float(source_train['images'].shape[0]) / target_train['images'].shape[0]
+
+ # src_qp = source_pq[:,1] / source_pq[:,0] * st_ratio
+ # tgt_qp = target_pq[:,1] / target_pq[:,0] * st_ratio
+
+ # w_source_pq = np.copy(src_qp)
+ # w_source_pq[source_pq[:,0]=1),(target_pq[:,0]=1),(source_pq[:,0] best_valid:
+ best_params = utils.get_params(sess)
+ best_valid = target_valid_acc
+ best_acc = target_test_acc
+
+ labd = np.concatenate((source_test['domains'], target_test['domains']), axis = 0)
+ print ('src valid: %.4f tgt valid: %.4f tgt test: %.4f best: %.4f ' % \
+ (source_valid_acc, target_valid_acc, target_test_acc, best_acc) )
+
+ acc_store = '%.4f, %.4f, %.4f, %.4f \n'%(source_valid_acc, target_valid_acc, target_test_acc, best_acc)
+ fout.write(acc_store)
+ '''
+
+# After training, copy the training output from the ModelArts container to OBS
+mox.file.copy_parallel(savePath, save_path)
+
+# After training, copy the training output from the ModelArts container to OBS
+# mox.file.copy_parallel(model_dir, args.train_url)
+# mox.file.copy_parallel(CONFIG.ROOT_DIR, args.train_url)  # save floating-point exception data to OBS
+# mox.file.copy_parallel(overflow_data_dir, args.train_url)  # save overflow data to OBS
+mox.file.copy_parallel(profiling_dir, args.train_url)  # save profiling data to OBS
\ No newline at end of file
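The `compute_MMD` helper in `main_DANN.py` estimates a Gaussian-kernel maximum mean discrepancy between two feature batches. A minimal NumPy sketch of the same computation (the function name `mmd_rbf` is hypothetical, not part of the patch; this mirrors the helper rather than reproducing it verbatim):

```python
import numpy as np

def mmd_rbf(h_fake, h_real, sigmas=(5,)):
    """Biased Gaussian-kernel MMD estimate between two feature batches."""
    n = min(len(h_fake), len(h_real))
    x, y = h_fake[:n], h_real[:n]
    x_sq = np.sum(x**2, axis=1, keepdims=True)   # (n, 1)
    y_sq = np.sum(y**2, axis=1, keepdims=True)   # (n, 1)
    # Pairwise squared distances via ||a||^2 - 2 a.b + ||b||^2
    dxx = x_sq - 2 * x @ x.T + x_sq.T
    dxy = x_sq - 2 * x @ y.T + y_sq.T
    dyy = y_sq - 2 * y @ y.T + y_sq.T
    kxx = kxy = kyy = 0.0
    for s in sigmas:  # accumulate kernel means over the sigma range
        kxx += np.mean(np.exp(-dxx / (2 * s**2)))
        kxy += np.mean(np.exp(-dxy / (2 * s**2)))
        kyy += np.mean(np.exp(-dyy / (2 * s**2)))
    # Clamp at zero before the square root to guard against float rounding
    return float(np.sqrt(max(kxx + kyy - 2 * kxy, 0.0)))

rng = np.random.default_rng(0)
a = rng.normal(size=(64, 128)).astype('float32')
b = rng.normal(loc=3.0, size=(64, 128)).astype('float32')
print(mmd_rbf(a, a))  # identical batches -> 0
print(mmd_rbf(a, b))  # shifted batch -> strictly larger
```

Identical batches give a zero discrepancy; a mean-shifted batch gives a strictly positive one, which is the sanity check one would run before trusting the metric during training.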
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/utils.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/utils.py
new file mode 100644
index 000000000..1c44d36ec
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/utils.py
@@ -0,0 +1,364 @@
+from npu_bridge.npu_init import *
+import tensorflow as tf
+import numpy as np
+import matplotlib.patches as mpatches
+# import matplotlib.pyplot as plt
+from mpl_toolkits.axes_grid1 import ImageGrid
+from sklearn import metrics
+import scipy
+# Model construction utilities below adapted from
+# https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html#deep-mnist-for-experts
+
+def get_params(sess):
+ variables = tf.trainable_variables()
+ params = {}
+ for i in range(len(variables)):
+ name = variables[i].name
+ params[name] = sess.run(variables[i])
+ return params
+
+
+def to_one_hot(x, N = -1):
+ x = x.astype('int32')
+ if np.min(x) !=0 and N == -1:
+ x = x - np.min(x)
+ x = x.reshape(-1)
+ if N == -1:
+ N = np.max(x) + 1
+ label = np.zeros((x.shape[0],N))
+ idx = range(x.shape[0])
+ label[idx,x] = 1
+ return label.astype('float32')
+
+def image_mean(x):
+ x_mean = x.mean((0, 1, 2))
+ return x_mean
+
+def shape(tensor):
+ """
+ Get the shape of a tensor. This is a compile-time operation,
+ meaning that it runs when building the graph, not running it.
+ This means that it cannot know the shape of any placeholders
+ or variables with shape determined by feed_dict.
+ """
+ return tuple([d.value for d in tensor.get_shape()])
+
+
+def fully_connected_layer(in_tensor, out_units):
+ """
+ Add a fully connected layer to the default graph, taking as input `in_tensor`, and
+ creating a hidden layer of `out_units` neurons. This should be done in a new variable
+ scope. Creates variables W and b, and computes activation_function(in * W + b).
+ """
+ _, num_features = shape(in_tensor)
+ weights = tf.get_variable(name = "weights", shape = [num_features, out_units], initializer = tf.truncated_normal_initializer(stddev=0.1))
+ biases = tf.get_variable( name = "biases", shape = [out_units], initializer=tf.constant_initializer(0.1))
+ return tf.matmul(in_tensor, weights) + biases
+
+
+def conv2d(in_tensor, filter_shape, out_channels):
+ """
+    Creates a conv2d layer. The input image (which should already be shaped like an image,
+ a 4D tensor [N, W, H, C]) is convolved with `out_channels` filters, each with shape
+ `filter_shape` (a width and height). The ReLU activation function is used on the
+ output of the convolution.
+ """
+ _, _, _, channels = shape(in_tensor)
+ W_shape = filter_shape + [channels, out_channels]
+
+ # create variables
+ weights = tf.get_variable(name = "weights", shape = W_shape, initializer=tf.truncated_normal_initializer(stddev=0.1))
+ biases = tf.get_variable(name = "biases", shape = [out_channels], initializer= tf.constant_initializer(0.1))
+ conv = tf.nn.conv2d( in_tensor, weights, strides=[1, 1, 1, 1], padding='SAME')
+ h_conv = conv + biases
+ return h_conv
+
+
+#def conv1d(in_tensor, filter_shape, out_channels):
+# _, _, channels = shape(in_tensor)
+# W_shape = [filter_shape, channels, out_channels]
+#
+# W = tf.truncated_normal(W_shape, dtype = tf.float32, stddev = 0.1)
+# weights = tf.Variable(W, name = "weights")
+# b = tf.truncated_normal([out_channels], dtype = tf.float32, stddev = 0.1)
+# biases = tf.Variable(b, name = "biases")
+# conv = tf.nn.conv1d(in_tensor, weights, stride=1, padding='SAME')
+# h_conv = conv + biases
+# return h_conv
+
+def vars_from_scopes(scopes):
+ """
+ Returns list of all variables from all listed scopes. Operates within the current scope,
+ so if current scope is "scope1", then passing in ["weights", "biases"] will find
+ all variables in scopes "scope1/weights" and "scope1/biases".
+ """
+ current_scope = tf.get_variable_scope().name
+ #print(current_scope)
+ if current_scope != '':
+ scopes = [current_scope + '/' + scope for scope in scopes]
+ var = []
+ for scope in scopes:
+ for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope):
+ var.append(v)
+ return var
+
+def tfvar2str(tf_vars):
+ names = []
+ for i in range(len(tf_vars)):
+ names.append(tf_vars[i].name)
+ return names
+
+
+def shuffle_aligned_list(data):
+ """Shuffle arrays in a list by shuffling each array identically."""
+ num = data[0].shape[0]
+ p = np.random.permutation(num)
+ return [d[p] for d in data]
+
+def normalize_images(img_batch):
+ fl = tf.cast(img_batch, tf.float32)
+ return tf.map_fn(tf.image.per_image_standardization, fl)
+
+
+def batch_generator(data, batch_size, shuffle=True):
+ """Generate batches of data.
+
+ Given a list of array-like objects, generate batches of a given
+ size by yielding a list of array-like objects corresponding to the
+ same slice of each input.
+ """
+ if shuffle:
+ data = shuffle_aligned_list(data)
+
+ batch_count = 0
+ while True:
+ if batch_count * batch_size + batch_size >= len(data[0]):
+ batch_count = 0
+
+ if shuffle:
+ data = shuffle_aligned_list(data)
+
+ start = batch_count * batch_size
+ end = start + batch_size
+ batch_count += 1
+ yield [d[start:end] for d in data]
+
+
+def get_auc(predictions, labels):
+ fpr, tpr, thresholds = metrics.roc_curve(np.squeeze(labels).astype('float32'), np.squeeze(predictions).astype('float32'), pos_label=2)
+ return metrics.auc(fpr, tpr)
+
+
+def predictor_accuracy(predictions, labels):
+ """
+ Returns a number in [0, 1] indicating the percentage of `labels` predicted
+ correctly (i.e., assigned max logit) by `predictions`.
+ """
+ return tf.reduce_mean(tf.cast(tf.equal(tf.argmax(predictions, 1), tf.argmax(labels, 1)),tf.float32))
+
+def get_Wasser_distance(sess, model, data, L = 1, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+ Wasser = np.zeros((N,L))
+ if L == 1:
+ Wasser = Wasser.reshape(-1)
+ l = np.float32(1.)
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+        edn = min(N, srt + batch)  # was srt + batch - 1, which silently dropped one sample per batch
+ X = data[srt:edn]
+ if L == 1:
+ Wasser[srt:edn] = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.lr_g:l, model.train: False})
+ else:
+ Wasser[srt:edn,:] = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.lr_g:l, model.train: False})
+ return Wasser
+
+def get_data_pred(sess, model, obj_acc, data, labels, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+ if obj_acc == 'feature':
+ temp = sess.run(model.features,feed_dict={model.X: data[0:2].astype('float32'), model.train: False})
+ pred = np.zeros((data.shape[0],temp.shape[1])).astype('float32')
+ else:
+ pred= np.zeros(labels.shape).astype('float32')
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch)
+ X = data[srt:edn]
+        if obj_acc == 'y':  # use ==, not 'is', for string comparison
+ pred[srt:edn,:] = sess.run(model.y_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
+        elif obj_acc == 'd':
+ if i == 0:
+ temp = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
+ pred= np.zeros((labels.shape[0], temp.shape[1])).astype('float32')
+ pred[srt:edn,:]= sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
+        elif obj_acc == 'feature':
+ pred[srt:edn] = sess.run(model.features,feed_dict={model.X: X.astype('float32'), model.train: False})
+ return pred
+
+def get_data_pq(sess, model, data, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+
+ z_pq = np.zeros([N, model.num_domains]).astype('float32')
+
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch)
+ X = data[srt:edn]
+
+ z_pq[srt:edn,:] = sess.run(model.test_pq,feed_dict={model.X: X.astype('float32'), model.train: False})
+
+ return z_pq
+
+def get_feature(sess, model, data, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+
+ feature = np.zeros([N, model.feature_dim]).astype('float32')
+
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch)
+ X = data[srt:edn]
+
+ feature[srt:edn,:] = sess.run(model.features,feed_dict={model.X: X.astype('float32'), model.train: False})
+
+ return feature
+
+def get_y_loss(sess, model, data, label, batch = 1024):
+ N = data.shape[0]
+ n = np.ceil(N/batch).astype(np.int32)
+
+ y_loss = np.zeros(N).astype('float32')
+
+ srt = 0
+ edn = 0
+ for i in range(n + 1):
+ srt = edn
+ edn = min(N, srt + batch)
+ X = data[srt:edn]
+ y = label[srt:edn]
+
+ y_loss[srt:edn] = sess.run(model.y_loss,feed_dict={model.X: X.astype('float32'), model.y: y, model.train: False})
+
+ return y_loss
+
+
+def get_acc(pred, label):
+ if len(pred.shape) > 1:
+ pred = np.argmax(pred,axis = 1)
+ if len(label.shape) > 1:
+ label = np.argmax(label, axis = 1)
+ #pdb.set_trace()
+ acc = (pred == label).sum().astype('float32')
+ return acc/label.shape[0]
+
+
+# def imshow_grid(images, shape=[2, 8]):
+# """Plot images in a grid of a given shape."""
+# fig = plt.figure(1)
+# grid = ImageGrid(fig, 111, nrows_ncols=shape, axes_pad=0.05)
+
+# size = shape[0] * shape[1]
+# for i in range(size):
+# grid[i].axis('off')
+# grid[i].imshow(images[i]) # The AxesGrid object work as a list of axes.
+
+# plt.show()
+
+def dic2list(sources, targets):
+ names_dic = {}
+ for key in sources:
+ names_dic[sources[key]] = key
+ for key in targets:
+ names_dic[targets[key]] = key
+ names = []
+ for i in range(len(names_dic)):
+ names.append(names_dic[i])
+ return names
+
+# def plot_embedding(X, y, d, names, title=None):
+# """Plot an embedding X with the class label y colored by the domain d."""
+
+# x_min, x_max = np.min(X, 0), np.max(X, 0)
+# X = (X - x_min) / (x_max - x_min)
+# colors = np.array([[0.6,0.4,1.0,1.0],
+# [1.0,0.1,1.0,1.0],
+# [0.6,1.0,0.6,1.0],
+# [0.1,0.4,0.4,1.0],
+# [0.4,0.6,0.1,1.0],
+# [0.4,0.4,0.4,0.4]]
+# )
+# # Plot colors numbers
+# plt.figure(figsize=(10,10))
+# ax = plt.subplot(111)
+# for i in range(X.shape[0]):
+# # plot colored number
+# plt.text(X[i, 0], X[i, 1], str(y[i]),
+# color=colors[d[i]],
+# fontdict={'weight': 'bold', 'size': 9})
+
+# plt.xticks([]), plt.yticks([])
+# patches = []
+# for i in range(max(d)+1):
+# patches.append( mpatches.Patch(color=colors[i], label=names[i]))
+# plt.legend(handles=patches)
+# if title is not None:
+# plt.title(title)
+
+def load_plot(file_name):
+ mat = scipy.io.loadmat(file_name)
+ dann_tsne = mat['dann_tsne']
+ test_labels = mat['test_labels']
+ test_domains = mat['test_domains']
+ names = mat['names']
+ plot_embedding(dann_tsne, test_labels.argmax(1), test_domains.argmax(1), names, 'Domain Adaptation')
+
+
+
+def softmax(x):
+ """Compute softmax values for each sets of scores in x."""
+ e_x = np.exp(x - np.max(x))
+ return e_x / e_x.sum(axis=0)
+
+def norm_matrix(X, l):
+    Y = np.zeros(X.shape)
+ for i in range(X.shape[0]):
+ Y[i] = X[i]/np.linalg.norm(X[i],l)
+ return Y
+
+
+def description(sources, targets):
+ source_names = sources.keys()
+ #print(source_names)
+ target_names = targets.keys()
+ N = min(len(source_names), 4)
+
+    source_names = list(source_names)  # convert dict_keys to a list so it can be indexed
+    target_names = list(target_names)
+    description = source_names[0]  # in Python 3, dict keys cannot be indexed or sliced directly, so convert to a list first
+ for i in range(1,N):
+ description = description + '_' + source_names[i]
+ description = description + '-' + target_names[0]
+ return description
+
+def channel_dropout(X, p):
+ if p == 0:
+ return X
+ mask = tf.random_uniform(shape = [tf.shape(X)[0], tf.shape(X)[2]])
+ mask = mask + 1 - p
+ mask = tf.floor(mask)
+ dropout = tf.expand_dims(mask,axis = 1) * X/(1-p)
+ return dropout
+
+def sigmoid(x):
+ return 1 / (1 + np.exp(-x))
--
Gitee
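`utils.batch_generator` above yields aligned mini-batches from several arrays forever, reshuffling all arrays with one shared permutation whenever an epoch would overflow. A self-contained sketch of that pattern (standalone, outside the TensorFlow session; the demo arrays are made up):

```python
import numpy as np

def batch_generator(data, batch_size, shuffle=True):
    """Endlessly yield aligned mini-batches from a list of arrays."""
    def reshuffle(arrays):
        p = np.random.permutation(len(arrays[0]))  # one permutation for all arrays
        return [a[p] for a in arrays]

    if shuffle:
        data = reshuffle(data)
    count = 0
    while True:
        if count * batch_size + batch_size >= len(data[0]):
            count = 0  # wrap around: start a new epoch
            if shuffle:
                data = reshuffle(data)
        start = count * batch_size
        count += 1
        yield [a[start:start + batch_size] for a in data]

images = np.arange(100).reshape(100, 1)
labels = np.arange(100)
gen = batch_generator([images, labels], 32, shuffle=False)
X, y = next(gen)
print(X.shape, y.shape)  # (32, 1) (32,)
```

Because one permutation is applied to every array, image/label/domain rows stay aligned across batches, which is what the training loop in `main_DANN.py` relies on when it consumes source and target generators in lockstep.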
From a7714b600a3630c279e8af98f20bb8e3eee18876 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:40:29 +0000
Subject: [PATCH 09/16] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20Te?=
=?UTF-8?q?nsorFlow/contrib/cv/club/CLUB=5Ftf=5Fwubo9826/MI=5FDA/.keep?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/.keep | 0
1 file changed, 0 insertions(+), 0 deletions(-)
delete mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/.keep
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/.keep b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MI_DA/.keep
deleted file mode 100644
index e69de29bb..000000000
--
Gitee
From e3cb319b6e097bd9cb7072ae5884c43ec5fe9550 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:40:42 +0000
Subject: [PATCH 10/16] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20Te?=
=?UTF-8?q?nsorFlow/contrib/cv/club/CLUB=5Ftf=5Fwubo9826/.keep?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/.keep | 0
1 file changed, 0 insertions(+), 0 deletions(-)
delete mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/.keep
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/.keep b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/.keep
deleted file mode 100644
index e69de29bb..000000000
--
Gitee
From b7f6c165a0a3e14b246e9acd69d10fe28379b61f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:40:52 +0000
Subject: [PATCH 11/16] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20Te?=
=?UTF-8?q?nsorFlow/contrib/cv/club/CLUB=5Ftf=5Fwubo9826/imageloader.py?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../cv/club/CLUB_tf_wubo9826/imageloader.py | 431 ------------------
1 file changed, 431 deletions(-)
delete mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/imageloader.py
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/imageloader.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/imageloader.py
deleted file mode 100644
index 0980af08d..000000000
--- a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/imageloader.py
+++ /dev/null
@@ -1,431 +0,0 @@
-from npu_bridge.npu_init import *
-import numpy as np
-import pickle
-from tensorflow.examples.tutorials.mnist import input_data
-import scipy.io
-from utils import to_one_hot
-import pdb
-import cv2
-import os
-#from scipy.misc import imsave
-
-def load_datasets(data_dir = './', sets={'mnist':1, 'svhn':1, 'mnistm':1, 'usps':1}):
- datasets = {}
- for key in sets.keys():
- datasets[key] = {}
- if sets.get('mnist'): #原代码是sets.has_key('mnist'),不支持python3.X
- mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
- mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
- mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
-
- # mnist_inv = mnist_train * (-1) + 255
- # mnist_train = np.concatenate([mnist_train, mnist_inv])
- mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
- mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
- mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
- mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
- # dataset['mnist']['train'] = {'images': mnist_train, 'labels': np.concatenate([mnist.train.labels, mnist.train.labels])}
- datasets['mnist']['train'] = {'images': mnist_train, 'labels': mnist.train.labels}
- datasets['mnist']['test'] = {'images': mnist_test, 'labels': mnist.test.labels}
- datasets['mnist']['valid'] = {'images': mnist_valid, 'labels': mnist.validation.labels}
-
-
- if sets.get('mnist32'):
- mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
- mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
- mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
- mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
- mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
- mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
- mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
-
- mnist_train = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_train]
- mnist_train = np.concatenate(mnist_train)
- mnist_test = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_test]
- mnist_test = np.concatenate(mnist_test)
- mnist_valid = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in mnist_valid]
- mnist_valid = np.concatenate(mnist_valid)
-
- datasets['mnist32']['train'] = {'images': mnist_train, 'labels': mnist.train.labels}
- datasets['mnist32']['test'] = {'images': mnist_test, 'labels': mnist.test.labels}
- datasets['mnist32']['valid'] = {'images': mnist_valid, 'labels': mnist.validation.labels}
-
- # if sets.has_key('svhn'):
- # svhn = scipy.io.loadmat(data_dir + 'SVHN/svhn.mat')
- # svhn_train = svhn['train'].astype(np.uint8)
- # svhn_labtrain = svhn['labtrain'].astype(np.int32)
- # svhn_valid = svhn['val'].astype(np.uint8)
- # svhn_labval= svhn['labval'].astype(np.int32)
- # svhn_test = svhn['test'].astype(np.uint8)
- # svhn_labtest =svhn['labtest'].astype(np.int32)
- # dataset['svhn']['train'] = {'images': svhn_train, 'labels': svhn_labtrain}
- # dataset['svhn']['test'] = {'images': svhn_test, 'labels': svhn_labtest}
- # dataset['svhn']['valid'] = {'images': svhn_valid, 'labels': svhn_labval}
-
- if sets.get('svhn'):
- svhn_train = scipy.io.loadmat(data_dir + 'SVHN/train_32x32.mat')
- svhn_train_data = svhn_train['X'].transpose((3,0,1,2)).astype(np.uint8)
-
- svhn_train_label = svhn_train['y'] + 1
- svhn_train_label[svhn_train_label > 10] = 1
- svhn_train_label = to_one_hot(svhn_train_label)
-
- svhn_valid_data = svhn_train_data[-5000:]
- svhn_train_data = svhn_train_data[:-5000]
-
- svhn_valid_label = svhn_train_label[-5000:]
- svhn_train_label = svhn_train_label[:-5000]
-
- svhn_test = scipy.io.loadmat(data_dir + 'SVHN/test_32x32.mat')
- svhn_test_data = svhn_test['X'].transpose((3,0,1,2)).astype(np.uint8)
-
- svhn_test_label = svhn_test['y'] + 1
- svhn_test_label[svhn_test_label > 10] = 1
- svhn_test_label = to_one_hot(svhn_test_label)
-
- # svhn_train_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_train_data]
- # svhn_train_data = np.concatenate(svhn_train_data)
- # svhn_test_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_test_data]
- # svhn_test_data = np.concatenate(svhn_test_data)
- # svhn_valid_data = [np.expand_dims(cv2.resize(x, dsize=(28,28)), 0) for x in svhn_valid_data]
- # svhn_valid_data = np.concatenate(svhn_valid_data)
-
- svhn_train_data = svhn_train_data[:,2:30,2:30,:]
- svhn_test_data = svhn_test_data[:,2:30,2:30,:]
- svhn_valid_data = svhn_valid_data[:,2:30,2:30,:]
-
-
-
- datasets['svhn']['train'] = {'images': svhn_train_data, 'labels': svhn_train_label}
- datasets['svhn']['test'] = {'images': svhn_test_data, 'labels': svhn_test_label}
- datasets['svhn']['valid'] = {'images': svhn_valid_data, 'labels': svhn_valid_label}
-
- if sets.get('mnistm'):
- if 'mnist' not in locals():
- mnist = input_data.read_data_sets(data_dir + 'MNIST_data', one_hot=True)
- mnistm = pickle.load(open(data_dir + 'MNISTM/mnistm_data.pkl', 'rb'))
- mnistm_train = mnistm['train']
- mnistm_test = mnistm['test']
- mnistm_valid = mnistm['valid']
-
- datasets['mnistm']['train'] = {'images': mnistm_train, 'labels': mnist.train.labels}
- datasets['mnistm']['test'] = {'images': mnistm_test, 'labels': mnist.test.labels}
- datasets['mnistm']['valid'] = {'images': mnistm_valid, 'labels': mnist.validation.labels}
- if sets.get('usps'):
- usps_file = open(data_dir + 'USPS/usps_28x28.pkl', 'rb')
- usps = pickle.load(usps_file)
- n = 5104
- usps_train = (usps[0][0][:n].reshape(-1,28,28,1)*255.).astype('uint8')
- usps_train = np.concatenate([usps_train, usps_train, usps_train], 3)
- usps_valid = (usps[0][0][n:].reshape(-1,28,28,1)*255.).astype('uint8')
- usps_valid = np.concatenate([usps_valid, usps_valid, usps_valid], 3)
- usps_test = (usps[1][0].reshape(-1,28,28,1)*255.).astype('uint8')
- usps_test = np.concatenate([usps_test, usps_test, usps_test], 3)
- usps_images = (np.concatenate([usps[0][0], usps[1][0]]).reshape(-1, 28, 28, 1) * 255.).astype(np.uint8)
- usps_images = np.concatenate([usps_images, usps_images, usps_images], 3)
-
- datasets['usps']['train'] = {'images': usps_train, 'labels': to_one_hot(usps[0][1][:n])}
- datasets['usps']['test'] = {'images': usps_test, 'labels': to_one_hot(usps[1][1])}
- datasets['usps']['valid'] = {'images': usps_valid, 'labels': to_one_hot(usps[0][1][n:])}
-
- if sets.get('cifar'):
- batch_1 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_1.mat')
- batch_2 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_2.mat')
- batch_3 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_3.mat')
- batch_4 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_4.mat')
- batch_5 = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/data_batch_5.mat')
- batch_test = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/cifar-10-batches-mat/test_batch.mat')
-
- train_batch = np.concatenate([batch_1['data'], batch_2['data'], batch_3['data'],
- batch_4['data'], batch_5['data']])
- train_label = np.concatenate([batch_1['labels'], batch_2['labels'], batch_3['labels'],
- batch_4['labels'], batch_5['labels']])
-
- cifar_train_data = np.reshape(train_batch, [-1, 3, 32, 32]).transpose((0,2,3,1))
- cifar_train_label = to_one_hot(np.squeeze(train_label))
-
- cifar_train_data_reduce = cifar_train_data[cifar_train_label[:,6]==0]
- cifar_train_label_tmp = cifar_train_label[cifar_train_label[:,6]==0]
- cifar_train_label_reduce = np.concatenate([cifar_train_label_tmp[:,:6], cifar_train_label_tmp[:,7:]], axis=1)
-
- # cifar_valid_data = cifar_train_data[-5000:]
- # cifar_train_data = cifar_train_data[:-5000]
-
- # cifar_valid_label = cifar_train_label[-5000:]
- # cifar_train_label = cifar_train_label[:-5000]
-
- cifar_valid_data_reduce = cifar_train_data_reduce[-5000:]
- cifar_train_data_reduce = cifar_train_data_reduce[:-5000]
-
- cifar_valid_label_reduce = cifar_train_label_reduce[-5000:]
- cifar_train_label_reduce = cifar_train_label_reduce[:-5000]
-
- cifar_test_data = np.reshape(batch_test['data'], [-1, 3, 32, 32]).transpose((0,2,3,1))
- cifar_test_label = to_one_hot(np.squeeze(batch_test['labels']))
-
- cifar_test_data_reduce = cifar_test_data[cifar_test_label[:,6]==0]
- cifar_test_label_tmp = cifar_test_label[cifar_test_label[:,6]==0]
- cifar_test_label_reduce = np.concatenate([cifar_test_label_tmp[:,:6], cifar_test_label_tmp[:,7:]], axis=1)
-
- datasets['cifar']['train'] = {'images': cifar_train_data_reduce, 'labels': cifar_train_label_reduce}
- datasets['cifar']['test'] = {'images': cifar_test_data_reduce, 'labels': cifar_test_label_reduce}
- datasets['cifar']['valid'] = {'images': cifar_valid_data_reduce, 'labels': cifar_valid_label_reduce}
-
- if sets.get('stl'):
- stl_train = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/stl10_matlab/train.mat')
- stl_train_data = np.reshape(stl_train['X'], [-1, 3, 96, 96]).transpose((0,3,2,1))
-
- stl_train_label = np.squeeze(stl_train['y']-1)
-
- stl_train_label_tmp = np.zeros([stl_train_label.shape[0], 10])
-
- stl_train_label_tmp[stl_train_label==0,0]=1.
- stl_train_label_tmp[stl_train_label==1,2]=1.
- stl_train_label_tmp[stl_train_label==2,1]=1.
- stl_train_label_tmp[stl_train_label==3,3]=1.
- stl_train_label_tmp[stl_train_label==4,4]=1.
- stl_train_label_tmp[stl_train_label==5,5]=1.
- stl_train_label_tmp[stl_train_label==6,7]=1.
- stl_train_label_tmp[stl_train_label==7,6]=1.
- stl_train_label_tmp[stl_train_label==8,8]=1.
- stl_train_label_tmp[stl_train_label==9,9]=1.
-
-
- stl_train_data_reduce = stl_train_data[stl_train_label_tmp[:,6]==0]
- stl_train_label_tmp = stl_train_label_tmp[stl_train_label_tmp[:,6]==0]
- stl_train_label_reduce = np.concatenate([stl_train_label_tmp[:,:6], stl_train_label_tmp[:,7:]], axis=1)
-
-
- stl_test = scipy.io.loadmat('/home/yl353/Peter/new_domain/data/stl10_matlab/test.mat')
- stl_test_data = np.reshape(stl_test['X'], [-1, 3, 96, 96]).transpose((0,3,2,1))
-
- stl_test_label = np.squeeze(stl_test['y']-1)
-
- stl_test_label_tmp = np.zeros([stl_test_label.shape[0], 10])
-
- stl_test_label_tmp[stl_test_label==0,0]=1.
- stl_test_label_tmp[stl_test_label==1,2]=1.
- stl_test_label_tmp[stl_test_label==2,1]=1.
- stl_test_label_tmp[stl_test_label==3,3]=1.
- stl_test_label_tmp[stl_test_label==4,4]=1.
- stl_test_label_tmp[stl_test_label==5,5]=1.
- stl_test_label_tmp[stl_test_label==6,7]=1.
- stl_test_label_tmp[stl_test_label==7,6]=1.
- stl_test_label_tmp[stl_test_label==8,8]=1.
- stl_test_label_tmp[stl_test_label==9,9]=1.
-
- stl_test_data_reduce = stl_test_data[stl_test_label_tmp[:,6]==0]
- stl_test_label_tmp = stl_test_label_tmp[stl_test_label_tmp[:,6]==0]
- stl_test_label_reduce = np.concatenate([stl_test_label_tmp[:,:6], stl_test_label_tmp[:,7:]], axis=1)
-
-
- stl_valid_data_reduce = stl_train_data_reduce[-500:]
- stl_train_data_reduce = stl_train_data_reduce[:-500]
-
- stl_valid_label_reduce = stl_train_label_reduce[-500:]
- stl_train_label_reduce = stl_train_label_reduce[:-500]
-
- stl_train_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_train_data_reduce]
- stl_train_data_reduce = np.concatenate(stl_train_data_reduce)
- stl_test_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_test_data_reduce]
- stl_test_data_reduce = np.concatenate(stl_test_data_reduce)
- stl_valid_data_reduce = [np.expand_dims(cv2.resize(x, dsize=(32,32)), 0) for x in stl_valid_data_reduce]
- stl_valid_data_reduce = np.concatenate(stl_valid_data_reduce)
-
- datasets['stl']['train'] = {'images': stl_train_data_reduce, 'labels': stl_train_label_reduce}
- datasets['stl']['test'] = {'images': stl_test_data_reduce, 'labels': stl_test_label_reduce}
- datasets['stl']['valid'] = {'images': stl_valid_data_reduce, 'labels': stl_valid_label_reduce}
-
- return datasets
-
-
-def save_dataset(datasets,save_path = 'imp./save_datasets/'):
- if not os.path.exists(save_path):
- os.mkdir(save_path)
- for key in datasets.keys():
- train = datasets[key]['train']['images']
- valid = datasets[key]['valid']['images']
- test = datasets[key]['test']['images']
- labtrain = datasets[key]['train']['labels']
- labval = datasets[key]['valid']['labels']
- labtest = datasets[key]['test']['labels']
- scipy.io.savemat(save_path + key + '.mat',{'train':train, 'val':valid,'test':test,'labtrain':labtrain,'labval':labval,'labtest':labtest})
- return 0
-
-def sets_concatenate(datasets, sets):
- N_train = 0
- N_valid = 0
- N_test = 0
-
- for key in sets:
- label_len = datasets[key]['train']['labels'].shape[1]
- N_train = N_train + datasets[key]['train']['images'].shape[0]
- N_valid = N_valid + datasets[key]['valid']['images'].shape[0]
- N_test = N_test + datasets[key]['test']['images'].shape[0]
- S = datasets[key]['train']['images'].shape[1]
- train = {'images': np.zeros((N_train,S,S,3)).astype(np.float32),'labels':np.zeros((N_train,label_len)).astype('float32'),'domains':np.zeros((N_train,)).astype('float32')}
- valid = {'images': np.zeros((N_valid,S,S,3)).astype(np.float32),'labels':np.zeros((N_valid,label_len)).astype('float32'),'domains':np.zeros((N_valid,)).astype('float32')}
- test = {'images': np.zeros((N_test,S,S,3)).astype(np.float32),'labels':np.zeros((N_test,label_len)).astype('float32'),'domains':np.zeros((N_test,)).astype('float32')}
- srt = 0
- edn = 0
- for key in sets:
- domain = sets[key]
- srt = edn
- edn = srt + datasets[key]['train']['images'].shape[0]
- train['images'][srt:edn,:,:,:] = datasets[key]['train']['images']
- train['labels'][srt:edn,:] = datasets[key]['train']['labels']
- train['domains'][srt:edn] = domain * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32')
- srt = 0
- edn = 0
- for key in sets:
- domain = sets[key]
- srt = edn
- edn = srt + datasets[key]['valid']['images'].shape[0]
- valid['images'][srt:edn,:,:,:] = datasets[key]['valid']['images']
- valid['labels'][srt:edn,:] = datasets[key]['valid']['labels']
- valid['domains'][srt:edn] = domain * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32')
- srt = 0
- edn = 0
- for key in sets:
- domain = sets[key]
- srt = edn
- edn = srt + datasets[key]['test']['images'].shape[0]
- test['images'][srt:edn,:,:,:] = datasets[key]['test']['images']
- test['labels'][srt:edn,:] = datasets[key]['test']['labels']
- test['domains'][srt:edn] = domain * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32')
- return train, valid, test
-
-def source_target(datasets, sources, targets, unify_source = False):
- N1 = len(sources.keys())
- N_domain = N1 + len(targets.keys())
- domain_idx = 0
- for key in sources.keys():
- sources[key] = domain_idx
- domain_idx = domain_idx + 1
- for key in targets.keys():
- targets[key] = domain_idx
- domain_idx = domain_idx + 1
-
- source_train, source_valid, source_test = sets_concatenate(datasets, sources)
- target_train, target_valid, target_test = sets_concatenate(datasets, targets)
-
- if unify_source:
- source_train['domains'] = to_one_hot(0 * source_train['domains'], 2)
- source_valid['domains'] = to_one_hot(0 * source_valid['domains'], 2)
- source_test['domains'] = to_one_hot(0 * source_test['domains'], 2)
- target_train['domains'] = to_one_hot(0 * target_train['domains'] + 1, 2)
- target_valid['domains'] = to_one_hot(0 * target_valid['domains'] + 1, 2)
- target_test['domains'] = to_one_hot(0 * target_test['domains'] + 1, 2)
- else:
- source_train['domains'] = to_one_hot(source_train['domains'], N_domain)
- source_valid['domains'] = to_one_hot(source_valid['domains'], N_domain)
- source_test['domains'] = to_one_hot(source_test['domains'], N_domain)
- target_train['domains'] = to_one_hot(target_train['domains'], N_domain)
- target_valid['domains'] = to_one_hot(target_valid['domains'], N_domain)
- target_test['domains'] = to_one_hot(target_test['domains'], N_domain)
- return source_train, source_valid, source_test, target_train, target_valid, target_test
-
-def normalize(data):
- image_mean = data - np.expand_dims(np.expand_dims(data.mean((1,2)),1),1)
- image_std = np.sqrt((image_mean**2).mean((1,2))+1e-8)
- return image_mean / np.expand_dims(np.expand_dims(image_std,1),1)
-
-def normalize_dataset(datasets, t = 'norm'):
-    if t == 'mean':
- temp_data = []
- for key in datasets.keys():
- temp_data.append(datasets[key]['train']['images'])
- temp_data = np.concatenate(temp_data)
- image_mean = temp_data.mean((0, 1, 2))
- image_mean = image_mean.astype('float32')
- for key in datasets.keys():
- datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32') - image_mean)/255.
- datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32') - image_mean)/255.
- datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32') - image_mean)/255.
-    elif t == 'standard':
- for key in datasets.keys():
- datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32'))/255.
- datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32'))/255.
- datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32'))/255.
-    elif t == 'none':
- datasets = datasets
-    elif t == 'individual':
- for key in datasets.keys():
- temp_data = datasets[key]['train']['images']
- image_mean = temp_data.mean((0, 1, 2))
- image_mean = image_mean.astype('float32')
- datasets[key]['train']['images'] = (datasets[key]['train']['images'].astype('float32') - image_mean)/255.
- datasets[key]['valid']['images'] = (datasets[key]['valid']['images'].astype('float32') - image_mean)/255.
- datasets[key]['test']['images'] = (datasets[key]['test']['images'].astype('float32') - image_mean)/255.
-    elif t == 'norm':
- for key in datasets.keys():
-        if key == 'mnist':
- tmp_1 = datasets[key]['train']['images'][:(len(datasets[key]['train']['images']) // 2)]
- tmp_2 = datasets[key]['train']['images'][(len(datasets[key]['train']['images']) // 2):]
- datasets[key]['train']['images'] = np.concatenate([normalize(tmp_1),normalize(tmp_2)])
- else:
- datasets[key]['train']['images'] = normalize(datasets[key]['train']['images'])
-
- datasets[key]['valid']['images'] = normalize(datasets[key]['valid']['images'])
- datasets[key]['test']['images'] = normalize(datasets[key]['test']['images'])
-
- return datasets
-
-def source_target_separate(datasets, sources, targets):
- N1 = len(sources.keys())
- N_domain = N1 + len(targets.keys())
- domain_idx = 0
- sets = {}
- for key in sources.keys():
- sources[key] = domain_idx
- sets[key] = domain_idx
- domain_idx = domain_idx + 1
- for key in targets.keys():
- targets[key] = domain_idx
- sets[key] = domain_idx
- domain_idx = domain_idx + 1
- for key in datasets.keys():
- datasets[key]['train']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32'), N_domain)
- datasets[key]['valid']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32'), N_domain)
- datasets[key]['test']['domains'] = to_one_hot(sets[key] * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32'), N_domain)
- return datasets
-
-def source_target_separate_baseline(datasets, sources, targets):
- N1 = len(sources.keys())
- N_domain = N1 + len(targets.keys())
- domain_idx = 0
- domains = {}
- for key in sources.keys():
- sources[key] = domain_idx
- domains[key] = domain_idx
- domain_idx = domain_idx + 1
- for key in targets.keys():
- targets[key] = domain_idx
- domains[key] = domain_idx
- for key in datasets.keys():
- datasets[key]['train']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['train']['images'].shape[0],)).astype('float32'), 2)
- datasets[key]['valid']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['valid']['images'].shape[0],)).astype('float32'), 2)
- datasets[key]['test']['domains'] = to_one_hot(domains[key] * np.ones((datasets[key]['test']['images'].shape[0],)).astype('float32'), 2)
- return datasets
-
-
-# if __name__ == '__main__':
-# data_dir = '../dataset/folder/MNIST_data'
-# if not os.path.exists(data_dir):
-# os.makedirs(data_dir)
-# mnist = input_data.read_data_sets(data_dir, one_hot=True)
-# mnist_train = (mnist.train.images.reshape(55000, 28, 28, 1) * 255).astype(np.uint8)
-# mnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)
-#
-# # mnist_inv = mnist_train * (-1) + 255
-# # mnist_train = np.concatenate([mnist_train, mnist_inv])
-# mnist_test = (mnist.test.images.reshape(10000, 28, 28, 1) * 255).astype(np.uint8)
-# mnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)
-# mnist_valid = (mnist.validation.images.reshape(5000, 28, 28, 1) * 255).astype(np.uint8)
-# mnist_valid = np.concatenate([mnist_valid, mnist_valid, mnist_valid], 3)
-# # datasets['mnist']['train'] = {'images': mnist_train, 'labels': np.concatenate([mnist.train.labels, mnist.train.labels])}
-# print(mnist_test.shape)
-# print(mnist)
-
-
-
--
Gitee
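For reference, the `normalize` helper in the deleted imageloader code above standardizes each image to zero mean and unit variance over its spatial axes, per channel. Below is a minimal self-contained NumPy sketch of that per-image normalization; the function name `per_image_normalize` is illustrative and not part of the patch:

```python
import numpy as np

def per_image_normalize(data):
    """Standardize each image in an (N, H, W, C) batch to zero mean and
    unit variance over its spatial dimensions (per sample, per channel),
    mirroring the deleted normalize() helper that used expand_dims."""
    centered = data - data.mean(axis=(1, 2), keepdims=True)
    std = np.sqrt((centered ** 2).mean(axis=(1, 2), keepdims=True) + 1e-8)
    return centered / std

batch = np.random.rand(4, 28, 28, 3).astype('float32')
out = per_image_normalize(batch)
print(out.shape)  # (4, 28, 28, 3)
```

Using `keepdims=True` avoids the nested `np.expand_dims` calls of the original while computing the same statistics.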
From 6e1a5de3d08e9e93db280b7f24ce0a7885fa9933 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:41:07 +0000
Subject: [PATCH 12/16] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20Te?=
=?UTF-8?q?nsorFlow/contrib/cv/club/CLUB=5Ftf=5Fwubo9826/main=5FDANN.py?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../cv/club/CLUB_tf_wubo9826/main_DANN.py | 383 ------------------
1 file changed, 383 deletions(-)
delete mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/main_DANN.py
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/main_DANN.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/main_DANN.py
deleted file mode 100644
index 0f63749a7..000000000
--- a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/main_DANN.py
+++ /dev/null
@@ -1,383 +0,0 @@
-from npu_bridge.npu_init import *
-import os
-import argparse
-
-import numpy as np
-#from sklearn.manifold import TSNE
-#import scipy.io
-
-import tensorflow as tf
-#import tensorflow.contrib.slim as slim
-
-from MNISTModel_DANN import MNISTModel_DANN
-import imageloader as dataloader
-import utils
-from tqdm import tqdm
-
-import moxing as mox
-import precision_tool.tf_config as npu_tf_config
-import precision_tool.lib.config as CONFIG
-
-os.environ['CUDA_VISIBLE_DEVICES'] = '0' # index of the GPU to use (0 or 1); changed here from 0 to 1
-
-gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.25)
-
-parser = argparse.ArgumentParser(description="Domain Adaptation Training")
-parser.add_argument("--data_url", type=str, default="obs://cann-id1254/dataset/", help="path to dataset folder") #注意路径
-parser.add_argument("--train_url", type=str, default="./output")
-parser.add_argument("--save_path", type=str, default="obs://cann-id1254/savef/", help="path to save experiment output") #注意路径
-parser.add_argument("--source", type=str, default="svhn", help="specify source dataset")
-parser.add_argument("--target", type=str, default="mnist", help="specify target dataset")
-
-
-args, unparsed = parser.parse_known_args()
-
-
-data_path = args.data_url
-save_path = args.save_path
-batch_size = 64 # 64
-num_steps = 5000 # originally 15000
-epsilon = 0.5
-M = 0.1
-num_test_steps = 5000
-valid_steps = 100
-
-# Create the data directory inside the ModelArts container
-data_dir = "/cache/dataset/"
-os.makedirs(data_dir)
-print("已创建!!!!!!!!!!!!11")
-
-savePath= "/cache/savePath/"
-os.makedirs(savePath)
-print("已创建!!!!!!!!!!!!11")
-
-model_dir = "/cache/result"
-os.makedirs(model_dir)
-
-# Copy the data from OBS into the ModelArts container
-mox.file.copy_parallel(data_path, data_dir)
-
-# Since the bucket data was copied onto ModelArts, the load path below changes from data_path to data_dir
-datasets = dataloader.load_datasets(data_dir,{args.source:1,args.target:1}) # data_path replaced by data_dir
-# d_train = datasets['mnist']['train'].get('images')
-# print("----------------------------------",len(d_train))
-# print("----------------------------------",len(d_train))
-# print("----------------------------------",len(d_train))
-# d1 = datasets.keys()
-# print(d1)
-# d_m = datasets.get('mnist')
-# print("mnist",d_m.keys())
-# d_m_train_d = d_m.get('train').get('images')
-# print("mnist train,test,valid",d_m_train_d.shape) #,d_m.get('test'),d_m.get('valid')
-# mnist_train_samples = d_m_train_d.shape[0]
-# end = mnist_train_samples // batch_size * batch_size
-# print("end sample ",end)
-# d_m_train_d = d_m_train_d[:end]
-# d_m_train_l =
-# print(d_m_train_d.shape)
-
-# d2 = datasets.get('svhn')
-# d3 = d2.get('train')
-# d4 = d3['images']
-# print(d2.keys())
-# print(d3.keys())
-# print(d4.shape)
-
-
-# dataset = dataloader.normalize_dataset(dataset)
-sources = {args.source:1}
-targets = {args.target:1}
-description = utils.description(sources, targets)
-source_train, source_valid, source_test, target_train, target_valid, target_test = dataloader.source_target(datasets, sources, targets, unify_source = True)
-
-options = {}
-options['sample_shape'] = (28,28,3)
-options['num_domains'] = 2
-options['num_targets'] = 1
-options['num_labels'] = 10
-options['batch_size'] = batch_size
-options['G_iter'] = 1
-options['D_iter'] = 1
-options['ef_dim'] = 32
-options['latent_dim'] = 128
-options['t_idx'] = np.argmax(target_test['domains'][0])
-options['source_num'] = batch_size
-options['target_num'] = batch_size
-options['reg_disc'] = 0.1
-options['reg_con'] = 0.1
-options['lr_g'] = 0.001
-options['lr_d'] = 0.001
-options['reg_tgt'] = 1.0
-description = utils.description(sources, targets)
-description = description + '_DANN_' + str(options['reg_disc'])
-
-
-tf.reset_default_graph()
-graph = tf.get_default_graph()
-model = MNISTModel_DANN(options)
-
-# Floating-point overflow detection
-# config = npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options))
-# config = npu_tf_config.session_dump_config(config, action='overflow')
-# sess = tf.Session(graph = graph, config=config)
-
-# Disable all operator fusion rules
-# config = npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options))
-# config = npu_tf_config.session_dump_config(config, action='fusion_off')
-# sess = tf.Session(graph = graph,config=config)
-
-# ModelArts training: create a temporary directory for profiling data
-profiling_dir = "/cache/profiling"
-os.makedirs(profiling_dir)
-
-# Mixed precision. With mixed precision alone, accuracy meets the bar and matches the V100 result. Job: 5-20-10-56
-config_proto = tf.ConfigProto(gpu_options=gpu_options)
-custom_op = config_proto.graph_options.rewrite_options.custom_optimizers.add()
-custom_op.name = 'NpuOptimizer'
-# Enable mixed precision
-custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")
-# Enable profiling collection. Job: 5-20-19-16
-custom_op.parameter_map["profiling_mode"].b = True
-# # Collect task trace data only
-# custom_op.parameter_map["profiling_options"].s = tf.compat.as_bytes('{"output":"/cache/profiling","task_trace":"on"}')
-
-# Collect both task trace and iteration trace data. You can start by collecting task trace data only; if the problem still cannot be pinpointed, additionally collect iteration trace data
-custom_op.parameter_map["profiling_options"].s = tf.compat.as_bytes('{"output":"/cache/profiling","task_trace":"on","training_trace":"on","fp_point":"resnet_model/conv2d/Conv2Dresnet_model/batch_normalization/FusedBatchNormV3_Reduce","bp_point":"gradients/AddN_70"}')
-
-config = npu_config_proto(config_proto=config_proto)
-sess = tf.Session(graph = graph,config=config)
-
-# LossScale alone. Job: 5-20-15-05
-
-# # Mixed precision + LossScale + overflow data collection. Job: 5-20-16-56
-# # 1. Mixed precision
-# config_proto = tf.ConfigProto(gpu_options=gpu_options)
-# custom_op = config_proto.graph_options.rewrite_options.custom_optimizers.add()
-# custom_op.name = 'NpuOptimizer'
-# custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")
-# # 2. Overflow data collection
-# overflow_data_dir = "/cache/overflow"
-# os.makedirs(overflow_data_dir)
-# # dump_path: directory for dump data; it must be created in advance on the environment that launches training (container or host side), and the run user configured at installation must have read/write permission on it
-# custom_op.parameter_map["dump_path"].s = tf.compat.as_bytes(overflow_data_dir)
-# # enable_dump_debug: whether to enable overflow detection
-# custom_op.parameter_map["enable_dump_debug"].b = True
-# # dump_debug_mode: overflow detection mode, one of all/aicore_overflow/atomic_overflow
-# custom_op.parameter_map["dump_debug_mode"].s = tf.compat.as_bytes("all")
-# config = npu_config_proto(config_proto=config_proto)
-# sess = tf.Session(graph = graph,config=config)
-
-# Session as created by the original migrated code
-#sess = tf.Session(graph = graph, config=npu_config_proto(config_proto=tf.ConfigProto(gpu_options=gpu_options)))
-
-tf.global_variables_initializer().run(session = sess)
-
-record = []
-
-gen_source_batch = utils.batch_generator([source_train['images'],
- source_train['labels'],
- source_train['domains']], batch_size)
-
-print("gen_source_batch ",gen_source_batch)
-gen_target_batch = utils.batch_generator([target_train['images'],
- target_train['labels'],
- target_train['domains']], batch_size)
-print("gen_targe_batch ",gen_target_batch)
-gen_source_batch_valid = utils.batch_generator([np.concatenate([source_valid['images'], source_test['images']]),
- np.concatenate([source_valid['labels'], source_test['labels']]),
- np.concatenate([source_valid['domains'], source_test['domains']])],
- batch_size)
-print("gen_source_batch_valid ",gen_source_batch_valid)
-gen_target_batch_valid = utils.batch_generator([np.concatenate([target_valid['images'], target_test['images']]),
- np.concatenate([target_valid['labels'], target_test['labels']]),
- np.concatenate([target_valid['domains'], target_test['domains']])],
- batch_size)
-print("gen_target_batch_valid",gen_target_batch_valid)
-# source_data_valid = np.concatenate([source_valid['images'], source_test['images']])
-# target_data_valid = np.concatenate([target_valid['images'], target_test['images']])
-# source_label_valid = np.concatenate([source_valid['labels'], source_test['labels']])
-#
-# print("source_data_valid ",source_data_valid.shape)
-# print("target_data_valid ",target_data_valid.shape)
-# print("source_label_valid ",source_label_valid.shape)
-
-#save_path = './Result/' + description + '/'
-# print("save_path",save_path)
-
-# # Create the save folder
-# if not os.path.exists(save_path):
-# print("Creating ",save_path)
-# os.makedirs(save_path)
-
-# save_path = os.path.join(savePath, description)
-# print("save_path",save_path)
-# if not os.path.exists(save_path):
-# print("Creating!!!")
-# os.mkdir(save_path)
-
-def compute_MMD(H_fake, H_real, sigma_range=[5]):
-
- min_len = min([len(H_real),len(H_fake)])
- h_real = H_real[:min_len]
- h_fake = H_fake[:min_len]
-
- dividend = 1
- dist_x, dist_y = h_fake/dividend, h_real/dividend
- x_sq = np.expand_dims(np.sum(dist_x**2, axis=1), 1) # 64*1
- y_sq = np.expand_dims(np.sum(dist_y**2, axis=1), 1) # 64*1
- dist_x_T = np.transpose(dist_x)
- dist_y_T = np.transpose(dist_y)
- x_sq_T = np.transpose(x_sq)
- y_sq_T = np.transpose(y_sq)
-
- tempxx = -2*np.matmul(dist_x,dist_x_T) + x_sq + x_sq_T # (xi -xj)**2
- tempxy = -2*np.matmul(dist_x,dist_y_T) + x_sq + y_sq_T # (xi -yj)**2
- tempyy = -2*np.matmul(dist_y,dist_y_T) + y_sq + y_sq_T # (yi -yj)**2
-
-
- for sigma in sigma_range:
- kxx, kxy, kyy = 0, 0, 0
- kxx += np.mean(np.exp(-tempxx/2/(sigma**2)))
- kxy += np.mean(np.exp(-tempxy/2/(sigma**2)))
- kyy += np.mean(np.exp(-tempyy/2/(sigma**2)))
-
- gan_cost_g = np.sqrt(kxx + kyy - 2*kxy)
- return gan_cost_g
-
-best_valid = -1
-best_acc = -1
-
-best_src_acc = -1
-best_src = -1
-
-best_bound_acc = -1
-best_bound = 100000
-
-best_iw_acc = -1
-best_iw = 100000
-
-best_ben_acc = -1
-best_ben = 100000
-
-#output_file = save_path+'acc.txt'
-output_file = savePath+description+"acc.txt"
-print('Training...')
-with open(output_file, 'w') as fout:
- for i in tqdm(range(1, num_steps + 1)):
-
- #print("step ",i)
- # Adaptation param and learning rate schedule as described in the paper
-
-        X0, y0, d0 = gen_source_batch.__next__() # Python 2's g.next() was renamed to g.__next__() in Python 3; next(g) works as well.
- X1, y1, d1 = gen_target_batch.__next__()
-
- X = np.concatenate([X0, X1], axis = 0)
- #print("Input X ",X.shape)
- d = np.concatenate([d0, d1], axis = 0)
- #print("Input d ",d.shape)
- #print("Input y0 ",y0.shape)
- for j in range(options['D_iter']):
- # Update Adversary
- _, mi_loss = \
- sess.run([model.train_mi_ops, model.bound],
- feed_dict={model.X:X, model.train: True})
-
- for j in range(options['G_iter']):
-            # Update Feature Extractor & Label Predictor
- _, tploss, tp_acc = \
- sess.run([model.train_context_ops, model.y_loss, model.y_acc],
- feed_dict={model.X: X, model.y: y0, model.train: True})
-
- for j in range(options['G_iter']):
-            # Update Feature Extractor & Label Predictor
- _, td_loss, td_acc = \
- sess.run([model.train_domain_ops, model.d_loss, model.d_acc],
- feed_dict={model.X: X, model.domains: d, model.train: True})
-
- if i % 10 == 0:
- print ('%s iter %d mi_loss: %.4f d_loss: %.4f p_acc: %.4f' % \
- (description, i, mi_loss, td_loss, tp_acc))
-
- '''
- if i % valid_steps == 0:
- # Calculate bound
- # init_new_vars_op = tf.initialize_variables(model.domain_test_vars)
- # sess.run(init_new_vars_op)
-
- # for s in range(num_test_steps):
- # X0_test, y0_test, d0_test = gen_source_batch.next()
- # X1_test, y1_test, d1_test = gen_target_batch.next()
-
- # X_test = np.concatenate([X0_test, X1_test], axis = 0)
- # d_test = np.concatenate([d0_test, d1_test], axis = 0)
-
- # _ = sess.run(model.test_domain_ops, feed_dict={model.X:X_test,
- # model.domains: d_test, model.train: False})
-
- # source_pq = utils.get_data_pq(sess, model, source_data_valid)
- # target_pq = utils.get_data_pq(sess, model, target_data_valid)
-
- # st_ratio = float(source_train['images'].shape[0]) / target_train['images'].shape[0]
-
- # src_qp = source_pq[:,1] / source_pq[:,0] * st_ratio
- # tgt_qp = target_pq[:,1] / target_pq[:,0] * st_ratio
-
- # w_source_pq = np.copy(src_qp)
-        # w_source_pq[source_pq[:,0] ...]  (several commented lines collapsed/truncated in this hunk)
-        if target_valid_acc > best_valid:
- best_params = utils.get_params(sess)
- best_valid = target_valid_acc
- best_acc = target_test_acc
-
- labd = np.concatenate((source_test['domains'], target_test['domains']), axis = 0)
- print ('src valid: %.4f tgt valid: %.4f tgt test: %.4f best: %.4f ' % \
- (source_valid_acc, target_valid_acc, target_test_acc, best_acc) )
-
- acc_store = '%.4f, %.4f, %.4f, %.4f \n'%(source_valid_acc, target_valid_acc, target_test_acc, best_acc)
- fout.write(acc_store)
- '''
-
-# After training, copy the training output from the ModelArts container to OBS
-mox.file.copy_parallel(savePath, save_path)
-
-# After training, copy the training output from the ModelArts container to OBS
-# mox.file.copy_parallel(model_dir, args.train_url)
-# mox.file.copy_parallel(CONFIG.ROOT_DIR, args.train_url) # save floating-point exception data to OBS
-# mox.file.copy_parallel(overflow_data_dir,args.train_url) # save overflow data to OBS
-mox.file.copy_parallel(profiling_dir, args.train_url) # save profiling data to OBS
\ No newline at end of file
--
Gitee
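The `compute_MMD` helper in the deleted main_DANN.py above estimates a Gaussian-kernel maximum mean discrepancy between two feature batches. Below is a compact NumPy sketch of that estimator, assuming a single illustrative bandwidth (the deleted code loops over `sigma_range`, but since it resets `kxx`, `kxy`, `kyy` inside the loop, only the last sigma appears to contribute anyway):

```python
import numpy as np

def gaussian_mmd(h_fake, h_real, sigma=5.0):
    """Biased Gaussian-kernel MMD estimate between two (N, D) batches,
    truncated to the shorter batch as in the deleted compute_MMD."""
    n = min(len(h_fake), len(h_real))
    x, y = h_fake[:n], h_real[:n]
    xx = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # ||xi - xj||^2
    xy = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # ||xi - yj||^2
    yy = ((y[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # ||yi - yj||^2
    k = lambda d: np.exp(-d / (2.0 * sigma ** 2)).mean()
    return np.sqrt(max(k(xx) + k(yy) - 2.0 * k(xy), 0.0))

np.random.seed(0)
a = np.random.randn(64, 16)
b = np.random.randn(64, 16) + 3.0  # shifted distribution
print(gaussian_mmd(a, a[::-1]) < gaussian_mmd(a, b))  # matching samples give smaller MMD
```

The broadcasting over `(N, 1, D)` and `(1, N, D)` computes the same pairwise squared distances as the `x_sq - 2*matmul + x_sq_T` expansion in the original.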
From 89d6e2556809418e573237c8dc49aed09059bba2 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:41:16 +0000
Subject: [PATCH 13/16] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20Te?=
=?UTF-8?q?nsorFlow/contrib/cv/club/CLUB=5Ftf=5Fwubo9826/utils.py?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../contrib/cv/club/CLUB_tf_wubo9826/utils.py | 364 ------------------
1 file changed, 364 deletions(-)
delete mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/utils.py
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/utils.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/utils.py
deleted file mode 100644
index 1c44d36ec..000000000
--- a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/utils.py
+++ /dev/null
@@ -1,364 +0,0 @@
-from npu_bridge.npu_init import *
-import tensorflow as tf
-import numpy as np
-import matplotlib.patches as mpatches
-# import matplotlib.pyplot as plt
-from mpl_toolkits.axes_grid1 import ImageGrid
-from sklearn import metrics
-import scipy
-# Model construction utilities below adapted from
-# https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html#deep-mnist-for-experts
-
-def get_params(sess):
- variables = tf.trainable_variables()
- params = {}
- for i in range(len(variables)):
- name = variables[i].name
- params[name] = sess.run(variables[i])
- return params
-
-
-def to_one_hot(x, N = -1):
- x = x.astype('int32')
-    if np.min(x) != 0 and N == -1:
- x = x - np.min(x)
- x = x.reshape(-1)
- if N == -1:
- N = np.max(x) + 1
- label = np.zeros((x.shape[0],N))
- idx = range(x.shape[0])
- label[idx,x] = 1
- return label.astype('float32')
-
-def image_mean(x):
- x_mean = x.mean((0, 1, 2))
- return x_mean
-
-def shape(tensor):
- """
- Get the shape of a tensor. This is a compile-time operation,
- meaning that it runs when building the graph, not running it.
- This means that it cannot know the shape of any placeholders
- or variables with shape determined by feed_dict.
- """
- return tuple([d.value for d in tensor.get_shape()])
-
-
-def fully_connected_layer(in_tensor, out_units):
- """
- Add a fully connected layer to the default graph, taking as input `in_tensor`, and
- creating a hidden layer of `out_units` neurons. This should be done in a new variable
- scope. Creates variables W and b, and computes activation_function(in * W + b).
- """
- _, num_features = shape(in_tensor)
- weights = tf.get_variable(name = "weights", shape = [num_features, out_units], initializer = tf.truncated_normal_initializer(stddev=0.1))
- biases = tf.get_variable( name = "biases", shape = [out_units], initializer=tf.constant_initializer(0.1))
- return tf.matmul(in_tensor, weights) + biases
-
-
-def conv2d(in_tensor, filter_shape, out_channels):
- """
-    Creates a conv2d layer. The input image (which should already be shaped like an image,
- a 4D tensor [N, W, H, C]) is convolved with `out_channels` filters, each with shape
- `filter_shape` (a width and height). The ReLU activation function is used on the
- output of the convolution.
- """
- _, _, _, channels = shape(in_tensor)
- W_shape = filter_shape + [channels, out_channels]
-
- # create variables
- weights = tf.get_variable(name = "weights", shape = W_shape, initializer=tf.truncated_normal_initializer(stddev=0.1))
- biases = tf.get_variable(name = "biases", shape = [out_channels], initializer= tf.constant_initializer(0.1))
- conv = tf.nn.conv2d( in_tensor, weights, strides=[1, 1, 1, 1], padding='SAME')
- h_conv = conv + biases
- return h_conv
-
-
-#def conv1d(in_tensor, filter_shape, out_channels):
-# _, _, channels = shape(in_tensor)
-# W_shape = [filter_shape, channels, out_channels]
-#
-# W = tf.truncated_normal(W_shape, dtype = tf.float32, stddev = 0.1)
-# weights = tf.Variable(W, name = "weights")
-# b = tf.truncated_normal([out_channels], dtype = tf.float32, stddev = 0.1)
-# biases = tf.Variable(b, name = "biases")
-# conv = tf.nn.conv1d(in_tensor, weights, stride=1, padding='SAME')
-# h_conv = conv + biases
-# return h_conv
-
-def vars_from_scopes(scopes):
- """
- Returns list of all variables from all listed scopes. Operates within the current scope,
- so if current scope is "scope1", then passing in ["weights", "biases"] will find
- all variables in scopes "scope1/weights" and "scope1/biases".
- """
- current_scope = tf.get_variable_scope().name
- #print(current_scope)
- if current_scope != '':
- scopes = [current_scope + '/' + scope for scope in scopes]
- var = []
- for scope in scopes:
- for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope):
- var.append(v)
- return var
-
-def tfvar2str(tf_vars):
- names = []
- for i in range(len(tf_vars)):
- names.append(tf_vars[i].name)
- return names
-
-
-def shuffle_aligned_list(data):
- """Shuffle arrays in a list by shuffling each array identically."""
- num = data[0].shape[0]
- p = np.random.permutation(num)
- return [d[p] for d in data]
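The identical-permutation shuffle above keeps rows of every array aligned. A standalone NumPy sketch (a simplified copy for illustration, not the project's module):

```python
import numpy as np

def shuffle_aligned_list(data):
    # Draw ONE permutation and apply it to every array in the list,
    # so rows that were aligned before shuffling stay aligned after.
    p = np.random.permutation(data[0].shape[0])
    return [d[p] for d in data]

X = np.arange(10).reshape(5, 2)   # toy features
y = np.arange(5)                  # toy labels, row i of X pairs with y[i]
Xs, ys = shuffle_aligned_list([X, y])
# Each shuffled feature row still matches its shuffled label.
assert all((Xs[i] == X[ys[i]]).all() for i in range(5))
```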
-
-def normalize_images(img_batch):
- fl = tf.cast(img_batch, tf.float32)
- return tf.map_fn(tf.image.per_image_standardization, fl)
-
-
-def batch_generator(data, batch_size, shuffle=True):
- """Generate batches of data.
-
- Given a list of array-like objects, generate batches of a given
- size by yielding a list of array-like objects corresponding to the
- same slice of each input.
- """
- if shuffle:
- data = shuffle_aligned_list(data)
-
- batch_count = 0
- while True:
- if batch_count * batch_size + batch_size >= len(data[0]):  # reshuffle once the next batch would overrun the data; note ">=" also drops the final partial batch
- batch_count = 0
-
- if shuffle:
- data = shuffle_aligned_list(data)
-
- start = batch_count * batch_size
- end = start + batch_size
- batch_count += 1
- yield [d[start:end] for d in data]
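A minimal usage sketch of the infinite generator above, with shuffling stripped out so the output is deterministic (an illustration, not the project's code):

```python
import numpy as np

def batch_generator(data, batch_size):
    # Simplified copy of the generator above with shuffling removed.
    # Wraps around to the start once the next batch would overrun the data.
    batch_count = 0
    while True:
        if batch_count * batch_size + batch_size >= len(data[0]):
            batch_count = 0
        start = batch_count * batch_size
        end = start + batch_size
        batch_count += 1
        yield [d[start:end] for d in data]

gen = batch_generator([np.arange(10), np.arange(10) * 2], batch_size=4)
xb, yb = next(gen)
assert (yb == 2 * xb).all() and len(xb) == 4  # slices stay aligned
```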
-
-
-def get_auc(predictions, labels):
- fpr, tpr, thresholds = metrics.roc_curve(np.squeeze(labels).astype('float32'), np.squeeze(predictions).astype('float32'), pos_label=2)
- return metrics.auc(fpr, tpr)
-
-
-def predictor_accuracy(predictions, labels):
- """
- Returns a number in [0, 1] indicating the percentage of `labels` predicted
- correctly (i.e., assigned max logit) by `predictions`.
- """
- return tf.reduce_mean(tf.cast(tf.equal(tf.argmax(predictions, 1), tf.argmax(labels, 1)),tf.float32))
-
-def get_Wasser_distance(sess, model, data, L = 1, batch = 1024):
- N = data.shape[0]
- n = np.ceil(N/batch).astype(np.int32)
- Wasser = np.zeros((N,L))
- if L == 1:
- Wasser = Wasser.reshape(-1)
- l = np.float32(1.)
- srt = 0
- edn = 0
- for i in range(n + 1):
- srt = edn
- edn = min(N, srt + batch)  # not "batch - 1": the slice below excludes edn, so subtracting 1 made each chunk cover only batch-1 rows and could leave the tail unfilled
- X = data[srt:edn]
- if L == 1:
- Wasser[srt:edn] = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.lr_g:l, model.train: False})
- else:
- Wasser[srt:edn,:] = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.lr_g:l, model.train: False})
- return Wasser
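The `srt`/`edn` windowing used by `get_Wasser_distance` and the batched helpers that follow can be sketched on its own in NumPy (a simplified stand-in for the session-based inference, with a plain function in place of `sess.run`):

```python
import numpy as np

def batched_apply(fn, data, batch=1024):
    # Advance contiguous [srt, edn) windows until every row is covered,
    # mirroring the chunked-inference loop used by the helpers above.
    N = data.shape[0]
    out = []
    srt = 0
    while srt < N:
        edn = min(N, srt + batch)
        out.append(fn(data[srt:edn]))
        srt = edn
    return np.concatenate(out)

x = np.arange(2500, dtype=np.float32)
assert (batched_apply(lambda b: b * 2, x, batch=1024) == x * 2).all()
```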
-
-def get_data_pred(sess, model, obj_acc, data, labels, batch = 1024):
- N = data.shape[0]
- n = np.ceil(N/batch).astype(np.int32)
- if obj_acc == 'feature':
- temp = sess.run(model.features,feed_dict={model.X: data[0:2].astype('float32'), model.train: False})
- pred = np.zeros((data.shape[0],temp.shape[1])).astype('float32')
- else:
- pred= np.zeros(labels.shape).astype('float32')
- srt = 0
- edn = 0
- for i in range(n + 1):
- srt = edn
- edn = min(N, srt + batch)
- X = data[srt:edn]
- if obj_acc == 'y':  # "is" tests object identity, not string equality; use "=="
- pred[srt:edn,:] = sess.run(model.y_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
- elif obj_acc == 'd':
- if i == 0:
- temp = sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
- pred= np.zeros((labels.shape[0], temp.shape[1])).astype('float32')
- pred[srt:edn,:]= sess.run(model.d_pred,feed_dict={model.X: X.astype('float32'), model.train: False})
- elif obj_acc == 'feature':
- pred[srt:edn] = sess.run(model.features,feed_dict={model.X: X.astype('float32'), model.train: False})
- return pred
-
-def get_data_pq(sess, model, data, batch = 1024):
- N = data.shape[0]
- n = np.ceil(N/batch).astype(np.int32)
-
- z_pq = np.zeros([N, model.num_domains]).astype('float32')
-
- srt = 0
- edn = 0
- for i in range(n + 1):
- srt = edn
- edn = min(N, srt + batch)
- X = data[srt:edn]
-
- z_pq[srt:edn,:] = sess.run(model.test_pq,feed_dict={model.X: X.astype('float32'), model.train: False})
-
- return z_pq
-
-def get_feature(sess, model, data, batch = 1024):
- N = data.shape[0]
- n = np.ceil(N/batch).astype(np.int32)
-
- feature = np.zeros([N, model.feature_dim]).astype('float32')
-
- srt = 0
- edn = 0
- for i in range(n + 1):
- srt = edn
- edn = min(N, srt + batch)
- X = data[srt:edn]
-
- feature[srt:edn,:] = sess.run(model.features,feed_dict={model.X: X.astype('float32'), model.train: False})
-
- return feature
-
-def get_y_loss(sess, model, data, label, batch = 1024):
- N = data.shape[0]
- n = np.ceil(N/batch).astype(np.int32)
-
- y_loss = np.zeros(N).astype('float32')
-
- srt = 0
- edn = 0
- for i in range(n + 1):
- srt = edn
- edn = min(N, srt + batch)
- X = data[srt:edn]
- y = label[srt:edn]
-
- y_loss[srt:edn] = sess.run(model.y_loss,feed_dict={model.X: X.astype('float32'), model.y: y, model.train: False})
-
- return y_loss
-
-
-def get_acc(pred, label):
- if len(pred.shape) > 1:
- pred = np.argmax(pred,axis = 1)
- if len(label.shape) > 1:
- label = np.argmax(label, axis = 1)
- #pdb.set_trace()
- acc = (pred == label).sum().astype('float32')
- return acc/label.shape[0]
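`get_acc` accepts either one-hot or index-encoded predictions and labels; a quick standalone check of that behavior:

```python
import numpy as np

def get_acc(pred, label):
    # Reduce one-hot / probability rows to class indices, then compare.
    if pred.ndim > 1:
        pred = np.argmax(pred, axis=1)
    if label.ndim > 1:
        label = np.argmax(label, axis=1)
    return (pred == label).mean()

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
labels = np.array([0, 1, 1, 1])
assert get_acc(probs, labels) == 0.75          # index labels
assert get_acc(probs, np.eye(2)[labels]) == 0.75  # one-hot labels
```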
-
-
-# def imshow_grid(images, shape=[2, 8]):
-# """Plot images in a grid of a given shape."""
-# fig = plt.figure(1)
-# grid = ImageGrid(fig, 111, nrows_ncols=shape, axes_pad=0.05)
-
-# size = shape[0] * shape[1]
-# for i in range(size):
-# grid[i].axis('off')
-# grid[i].imshow(images[i]) # The AxesGrid object work as a list of axes.
-
-# plt.show()
-
-def dic2list(sources, targets):
- names_dic = {}
- for key in sources:
- names_dic[sources[key]] = key
- for key in targets:
- names_dic[targets[key]] = key
- names = []
- for i in range(len(names_dic)):
- names.append(names_dic[i])
- return names
-
-# def plot_embedding(X, y, d, names, title=None):
-# """Plot an embedding X with the class label y colored by the domain d."""
-
-# x_min, x_max = np.min(X, 0), np.max(X, 0)
-# X = (X - x_min) / (x_max - x_min)
-# colors = np.array([[0.6,0.4,1.0,1.0],
-# [1.0,0.1,1.0,1.0],
-# [0.6,1.0,0.6,1.0],
-# [0.1,0.4,0.4,1.0],
-# [0.4,0.6,0.1,1.0],
-# [0.4,0.4,0.4,0.4]]
-# )
-# # Plot colors numbers
-# plt.figure(figsize=(10,10))
-# ax = plt.subplot(111)
-# for i in range(X.shape[0]):
-# # plot colored number
-# plt.text(X[i, 0], X[i, 1], str(y[i]),
-# color=colors[d[i]],
-# fontdict={'weight': 'bold', 'size': 9})
-
-# plt.xticks([]), plt.yticks([])
-# patches = []
-# for i in range(max(d)+1):
-# patches.append( mpatches.Patch(color=colors[i], label=names[i]))
-# plt.legend(handles=patches)
-# if title is not None:
-# plt.title(title)
-
-def load_plot(file_name):
- # NOTE: depends on plot_embedding, which is commented out above.
- mat = scipy.io.loadmat(file_name)
- dann_tsne = mat['dann_tsne']
- test_labels = mat['test_labels']
- test_domains = mat['test_domains']
- names = mat['names']
- plot_embedding(dann_tsne, test_labels.argmax(1), test_domains.argmax(1), names, 'Domain Adaptation')
-
-
-
-def softmax(x):
- """Compute softmax values for each sets of scores in x."""
- e_x = np.exp(x - np.max(x))
- return e_x / e_x.sum(axis=0)
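The max-subtraction above is the standard numerical-stability trick: it leaves the result unchanged mathematically but prevents overflow in `np.exp`. Note the sum runs over `axis=0`, so for a 2-D batch this version normalizes down columns rather than across rows. A quick check on a single vector of large logits:

```python
import numpy as np

def softmax(x):
    # Subtracting the max is a no-op mathematically but keeps np.exp finite.
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

p = softmax(np.array([1000.0, 1001.0, 1002.0]))  # naive exp would overflow
assert np.isclose(p.sum(), 1.0)
assert p.argmax() == 2
```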
-
-def norm_matrix(X, l):
- Y = np.zeros(X.shape)
- for i in range(X.shape[0]):
- Y[i] = X[i]/np.linalg.norm(X[i],l)
- return Y
-
-
-def description(sources, targets):
- source_names = sources.keys()
- #print(source_names)
- target_names = targets.keys()
- N = min(len(source_names), 4)
-
- source_names = list(source_names)  # convert dict keys to a list so they can be indexed
- target_names = list(target_names)
- description = source_names[0]  # in Python 3, dict_keys views cannot be indexed or sliced; converting to a list first fixes this
- for i in range(1,N):
- description = description + '_' + source_names[i]
- description = description + '-' + target_names[0]
- return description
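The run-name construction above can be exercised directly; the dataset names below are hypothetical examples, not ones taken from this repository:

```python
def description(sources, targets):
    # Builds a run tag like "mnist_usps-svhn" from up to 4 source names
    # joined by "_", followed by "-" and the first target name.
    source_names = list(sources.keys())
    target_names = list(targets.keys())
    N = min(len(source_names), 4)
    desc = source_names[0]
    for i in range(1, N):
        desc = desc + '_' + source_names[i]
    return desc + '-' + target_names[0]

# Dicts preserve insertion order in Python 3.7+, so the tag is deterministic.
assert description({'mnist': 0, 'usps': 1}, {'svhn': 2}) == 'mnist_usps-svhn'
```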
-
-def channel_dropout(X, p):
- if p == 0:
- return X
- mask = tf.random_uniform(shape = [tf.shape(X)[0], tf.shape(X)[2]])
- mask = mask + 1 - p
- mask = tf.floor(mask)
- dropout = tf.expand_dims(mask,axis = 1) * X/(1-p)
- return dropout
-
-def sigmoid(x):
- return 1 / (1 + np.exp(-x))
--
Gitee
From 4683898d7a76f05432557d0fe0893a2a3430948d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:41:26 +0000
Subject: [PATCH 14/16] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20Te?=
=?UTF-8?q?nsorFlow/contrib/cv/club/CLUB=5Ftf=5Fwubo9826/MNISTModel=5FDANN?=
=?UTF-8?q?.py?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../club/CLUB_tf_wubo9826/MNISTModel_DANN.py | 469 ------------------
1 file changed, 469 deletions(-)
delete mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MNISTModel_DANN.py
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MNISTModel_DANN.py b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MNISTModel_DANN.py
deleted file mode 100644
index 41ffd3c07..000000000
--- a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/MNISTModel_DANN.py
+++ /dev/null
@@ -1,469 +0,0 @@
-from npu_bridge.npu_init import *
-import tensorflow as tf
-import utils
-import math
-import tensorflow.contrib.layers as layers
-#import keras.backend as K
-
-
-def leaky_relu(x, a=0.1):
- return tf.maximum(x, a * x)
-
-def noise(x, phase=True, std=1.0):
- eps = tf.random_normal(tf.shape(x), 0.0, std)
- output = tf.where(phase, x + eps, x)
- return output
-
-class MNISTModel_DANN(object):
- """Simple MNIST domain adaptation model."""
- def __init__(self, options):
- self.reg_disc = options['reg_disc']
- self.reg_con = options['reg_con']
- self.reg_tgt = options['reg_tgt']
- self.lr_g = options['lr_g']
- self.lr_d = options['lr_d']
- self.sample_type = tf.float32
- self.num_labels = options['num_labels']
- self.num_domains = options['num_domains']
- self.num_targets = options['num_targets']
- self.sample_shape = options['sample_shape']
- self.ef_dim = options['ef_dim']
- self.latent_dim = options['latent_dim']
- self.batch_size = options['batch_size']
- self.initializer = tf.contrib.layers.xavier_initializer()
- # self.initializer = tf.truncated_normal_initializer(stddev=0.1)
- self.X = tf.placeholder(tf.as_dtype(self.sample_type), [None] + list(self.sample_shape), name="input_X")
- #print(self.X)
- #self.X = tf.placeholder(tf.as_dtype(self.sample_type), [None]+[28,28,3], name="input_X")
- #print(self.X)
- self.y = tf.placeholder(tf.float32, [None, self.num_labels], name="input_labels")
- #print(self.y)
- #self.y = tf.placeholder(tf.float32, [self.batch_size,self.num_labels], name="input_labels")
- self.domains = tf.placeholder(tf.float32, [None, self.num_domains], name="input_domains")
- self.train = tf.placeholder(tf.bool, [], name = 'train')
- self._build_model()
- self._setup_train_ops()
-
- # def feature_extractor(self, reuse = False):
- # input_X = utils.normalize_images(self.X)
- # with tf.variable_scope('feature_extractor_conv1',reuse = reuse):
- # h_conv1 = layers.conv2d(input_X, self.ef_dim, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_pool1 = layers.max_pool2d(h_conv1, [2, 2], 2, padding='SAME')
-
- # with tf.variable_scope('feature_extractor_conv2',reuse = reuse):
- # h_conv2 = layers.conv2d(h_pool1, self.ef_dim * 2, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_pool2 = layers.max_pool2d(h_conv2, [2, 2], 2, padding='SAME')
-
- # with tf.variable_scope('feature_extractor_conv3',reuse = reuse):
- # h_conv3 = layers.conv2d(h_pool2, self.ef_dim * 4, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_pool3 = layers.max_pool2d(h_conv3, [2, 2], 2, padding='SAME')
-
- # with tf.variable_scope('feature_extractor_fc1'):
- # fc_input = layers.flatten(h_pool3)
- # fc_1 = layers.fully_connected(inputs=fc_input, num_outputs=self.latent_dim,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
-
- # self.features = fc_1
- # feature_shape = self.features.get_shape()
- # self.feature_dim = feature_shape[1].value
-
- # self.features_src = tf.slice(self.features, [0, 0], [self.batch_size, -1])
- # self.features_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features, [0, 0], [self.batch_size, -1]), lambda: self.features)
-
- # def feature_extractor(self, reuse = False):
- # input_X = utils.normalize_images(self.X)
- # with tf.variable_scope('feature_extractor_conv1',reuse = reuse):
- # h_conv1 = layers.conv2d(input_X, self.ef_dim, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_conv1 = layers.conv2d(h_conv1, self.ef_dim, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_conv1 = layers.max_pool2d(h_conv1, [2, 2], 2, padding='SAME')
-
- # with tf.variable_scope('feature_extractor_conv2',reuse = reuse):
- # h_conv2 = layers.conv2d(h_conv1, self.ef_dim * 2, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_conv2 = layers.conv2d(h_conv2, self.ef_dim * 2, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_conv2 = layers.max_pool2d(h_conv2, [2, 2], 2, padding='SAME')
-
- # with tf.variable_scope('feature_extractor_conv3',reuse = reuse):
- # h_conv3 = layers.conv2d(h_conv2, self.ef_dim * 4, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_conv3 = layers.conv2d(h_conv3, self.ef_dim * 4, 3, stride=1,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # h_conv3 = layers.max_pool2d(h_conv3, [2, 2], 2, padding='SAME')
-
- # with tf.variable_scope('feature_extractor_fc1'):
- # # fc_input = tf.nn.dropout(layers.flatten(h_conv3), keep_prob = 0.9)
- # fc_input = layers.flatten(h_conv3)
- # fc_1 = layers.fully_connected(inputs=fc_input, num_outputs=self.latent_dim,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
-
- # self.features = fc_1
- # feature_shape = self.features.get_shape()
- # self.feature_dim = feature_shape[1].value
- # self.features_src = tf.slice(self.features, [0, 0], [self.batch_size, -1])
- # self.features_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features, [0, 0], [self.batch_size, -1]), lambda: self.features)
-
-
- def feature_extractor_c(self, reuse = False):
- training = tf.cond(self.train, lambda: True, lambda: False)
- X = layers.instance_norm(self.X)  # note: computed but unused; the conv stack below reads self.X directly
- with tf.variable_scope('feature_extractor_c', reuse = reuse):
- h_conv1 = layers.conv2d(self.X, self.ef_dim*3, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
- h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
- h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
- h_conv1 = layers.max_pool2d(h_conv1, 2, 2, padding='SAME')
- h_conv1 = noise(tf.layers.dropout(h_conv1, rate=0.5, training=training), phase=training)
-
-
-
- h_conv2 = layers.conv2d(h_conv1, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
- h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
- h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
- h_conv2 = layers.max_pool2d(h_conv2, 2, 2, padding='SAME')
- h_conv2 = noise(tf.layers.dropout(h_conv2, rate=0.5, training=training), phase=training)
-
-
-
- h_conv3 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
- h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
- h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
- h_conv3 = tf.reduce_mean(h_conv3, axis=[1, 2])
-
- self.features_c = h_conv3
- feature_shape = self.features_c.get_shape()
- self.feature_c_dim = feature_shape[1].value
- self.features_c_src = tf.slice(self.features_c, [0, 0], [self.batch_size, -1])
- self.features_c_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features_c, [0, 0], [self.batch_size, -1]), lambda: self.features_c)
-
- def feature_extractor_d(self, reuse = False):
- training = tf.cond(self.train, lambda: True, lambda: False)
- X = layers.instance_norm(self.X)  # note: computed but unused; the conv stack below reads self.X directly
- with tf.variable_scope('feature_extractor_d', reuse = reuse):
- h_conv1 = layers.conv2d(self.X, self.ef_dim*3, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
- h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
- h_conv1 = layers.conv2d(h_conv1, self.ef_dim*3, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv1 = layers.batch_norm(h_conv1, activation_fn=leaky_relu)
- h_conv1 = layers.max_pool2d(h_conv1, 2, 2, padding='SAME')
- h_conv1 = noise(tf.layers.dropout(h_conv1, rate=0.5, training=training), phase=training)
-
-
-
- h_conv2 = layers.conv2d(h_conv1, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
- h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
- h_conv2 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv2 = layers.batch_norm(h_conv2, activation_fn=leaky_relu)
- h_conv2 = layers.max_pool2d(h_conv2, 2, 2, padding='SAME')
- h_conv2 = noise(tf.layers.dropout(h_conv2, rate=0.5, training=training), phase=training)
-
-
-
- h_conv3 = layers.conv2d(h_conv2, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
- h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
- h_conv3 = layers.conv2d(h_conv3, self.ef_dim*6, 3, stride=1, padding='SAME',
- activation_fn=None, weights_initializer=self.initializer)
- h_conv3 = layers.batch_norm(h_conv3, activation_fn=leaky_relu)
- h_conv3 = tf.reduce_mean(h_conv3, axis=[1, 2])
-
- self.features_d = h_conv3
- # self.features_d_src = tf.slice(self.features_d, [0, 0], [self.batch_size, -1])
- # self.features_d_for_prediction = tf.cond(self.train, lambda: tf.slice(self.features_d, [0, 0], [self.batch_size, -1]), lambda: self.features_d)
-
- def mi_net(self, input_sample, reuse = False):
- with tf.variable_scope('mi_net', reuse=reuse):
- fc_1 = layers.fully_connected(inputs=input_sample, num_outputs=64, activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- fc_2 = layers.fully_connected(inputs=fc_1, num_outputs=1, activation_fn=None, weights_initializer=self.initializer)
- return fc_2
-
-
- def mine(self):
- # tmp_1 = tf.random_shuffle(tf.range(self.batch_size))
- # tmp_2 = tf.random_shuffle(tf.range(self.batch_size))
- # shuffle_d_1 = tf.gather(tf.slice(tf.identity(self.features_d), [0, 0], [self.batch_size, -1]), tmp_1)
- # shuffle_d_2 = tf.gather(tf.slice(tf.identity(self.features_d), [self.batch_size, 0], [self.batch_size, -1]), tmp_2)
- # self.shuffle_d = tf.concat([shuffle_d_1, shuffle_d_2], axis = 0)
- tmp = tf.random_shuffle(tf.range(self.batch_size*2))
- self.shuffle_d = tf.gather(self.features_d, tmp)
-
- input_0 = tf.concat([self.features_c,self.features_d], axis = -1)
- input_1 = tf.concat([self.features_c,self.shuffle_d], axis = -1)
-
- T_0 = self.mi_net(input_0)
- T_1 = self.mi_net(input_1, reuse=True)
-
- E_pos = math.log(2.) - tf.nn.softplus(-T_0)
- E_neg = tf.nn.softplus(-T_1) + T_1 - math.log(2.)
-
- # grad = tf.gradients(mi_l, [self.features_c, self.features_d, self.shuffle_d])
- # pdb.set_trace()
- # self.penalty = tf.reduce_mean(tf.square(tf.reduce_sum(tf.square(grad))-1.))
- self.bound = tf.reduce_mean(E_pos - E_neg)
-
-
- def club(self, reuse=False):
- with tf.variable_scope('mi_net', reuse=reuse):
- p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
-
- p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
-
- mu = prediction
- logvar = prediction_1
-
- prediction_tile = tf.tile(tf.expand_dims(prediction, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
- features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
-
- positive = -(mu - self.features_d)**2/2./tf.exp(logvar)
- negative = -tf.reduce_mean((features_d_tile-prediction_tile)**2, 1)/2./tf.exp(logvar)
-
- # positive = -(prediction-self.features_d)**2
- # negative = -tf.reduce_mean((features_d_tile-prediction_tile)**2, 1)
-
- self.lld = tf.reduce_mean(tf.reduce_sum(positive, -1))
- self.bound = tf.reduce_mean(tf.reduce_sum(positive, -1)-tf.reduce_sum(negative, -1))
-
- def club_sample(self, reuse=False):
- with tf.variable_scope('mi_net', reuse=reuse):
- p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
-
- p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
-
- mu = prediction
- logvar = prediction_1
-
- tmp = tf.random_shuffle(tf.range(self.batch_size*2))
- self.shuffle_d = tf.gather(self.features_d, tmp)
-
- positive = -(mu - self.features_d)**2/2./tf.exp(logvar)
- negative = -(mu - self.shuffle_d)**2/2./tf.exp(logvar)
-
- self.lld = tf.reduce_mean(tf.reduce_sum(positive, -1))
- self.bound = tf.reduce_mean(tf.reduce_sum(positive, -1)-tf.reduce_sum(negative, -1))
-
- def NWJ(self, reuse=False):
- features_c_tile = tf.tile(tf.expand_dims(self.features_c, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
- features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
- input_0 = tf.concat([self.features_c, self.features_d], axis = -1)
- input_1 = tf.concat([features_c_tile, features_d_tile], axis = -1)
-
- T_0 = self.mi_net(input_0)
- T_1 = self.mi_net(input_1, reuse=True) - 1.
-
- self.bound = tf.reduce_mean(T_0) - tf.reduce_mean(tf.exp(tf.reduce_logsumexp(T_1, 1) - math.log(self.batch_size*2)))
-
- def VUB(self, reuse=False):
- with tf.variable_scope('mi_net', reuse=reuse):
- p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
-
- p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
-
- mu = prediction
- logvar = prediction_1
-
- self.lld = tf.reduce_mean(tf.reduce_sum(-(mu-self.features_d)**2 / tf.exp(logvar) - logvar, -1))
- self.bound = 1. / 2. * tf.reduce_mean(mu**2 + tf.exp(logvar) - 1. - logvar)
-
- def L1OutUB(self, reuse=False):
- with tf.variable_scope('mi_net', reuse=reuse):
- p_0 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- prediction = layers.fully_connected(inputs=p_0, num_outputs=int(self.features_d.shape[1]), activation_fn=None, weights_initializer=self.initializer)
-
- p_1 = layers.fully_connected(inputs=self.features_c, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- prediction_1 = layers.fully_connected(inputs=p_1, num_outputs=int(self.features_d.shape[1]), activation_fn=tf.nn.tanh, weights_initializer=self.initializer)
-
- mu = prediction
- logvar = prediction_1
-
- positive = tf.reduce_sum(-(mu - self.features_d)**2/2./tf.exp(logvar) - logvar/2., -1)
-
- prediction_tile = tf.tile(tf.expand_dims(prediction, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
- prediction_1_tile = tf.tile(tf.expand_dims(prediction_1, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
- features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
-
- all_probs = tf.reduce_sum(-(features_d_tile-prediction_tile)**2/2./tf.exp(prediction_1_tile) - prediction_1_tile/2., -1)
- diag_mask = tf.diag([-20.]*self.batch_size*2)
-
- negative = tf.reduce_logsumexp(all_probs + diag_mask, 0) - math.log(self.batch_size*2 - 1.)
- self.bound = tf.reduce_mean(positive-negative)
- self.lld = tf.reduce_mean(tf.reduce_sum(-(mu - self.features_d)**2/tf.exp(logvar) - logvar, -1))
-
-
- def nce(self):
-
- features_c_tile = tf.tile(tf.expand_dims(self.features_c, dim=0), tf.constant([self.batch_size*2, 1, 1], tf.int32))
- features_d_tile = tf.tile(tf.expand_dims(self.features_d, dim=1), tf.constant([1, self.batch_size*2, 1], tf.int32))
- input_0 = tf.concat([self.features_c, self.features_d], axis = -1)
- input_1 = tf.concat([features_c_tile, features_d_tile], axis = -1)
-
- T_0 = self.mi_net(input_0)
- T_1 = tf.reduce_mean(self.mi_net(input_1, reuse=True), axis=1)
-
- E_pos = math.log(2.) - tf.nn.softplus(-T_0)
- E_neg = tf.nn.softplus(-T_1) + T_1 - math.log(2.)
-
- self.bound = tf.reduce_mean(E_pos - E_neg)
-
-
- def label_predictor(self):
- # with tf.variable_scope('label_predictor_fc1'):
- # fc_1 = layers.fully_connected(inputs=self.features_for_prediction, num_outputs=self.latent_dim,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- with tf.variable_scope('label_predictor_logits'):
- logits = layers.fully_connected(inputs=self.features_c_for_prediction, num_outputs=self.num_labels,
- activation_fn=None, weights_initializer=self.initializer)
-
- self.y_pred = tf.nn.softmax(logits)
- self.y_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = self.y))
- self.y_acc = utils.predictor_accuracy(self.y_pred, self.y)
-
-
- def domain_predictor(self, reuse = False):
- with tf.variable_scope('domain_predictor_fc1', reuse = reuse):
- fc_1 = layers.fully_connected(inputs=self.features_d, num_outputs=self.latent_dim,
- activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- with tf.variable_scope('domain_predictor_logits', reuse = reuse):
- self.d_logits = layers.fully_connected(inputs=fc_1, num_outputs=self.num_domains,
- activation_fn=None, weights_initializer=self.initializer)
-
-
- logits_real = tf.slice(self.d_logits, [0, 0], [self.batch_size, -1])
- logits_fake = tf.slice(self.d_logits, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
-
- label_real = tf.slice(self.domains, [0, 0], [self.batch_size, -1])
- label_fake = tf.slice(self.domains, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
- label_pseudo = tf.ones(label_fake.shape) - label_fake
-
- self.d_pred = tf.nn.sigmoid(self.d_logits)
- real_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_real, labels = label_real))
- fake_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_fake, labels = label_fake))
- self.d_loss = real_d_loss + self.reg_tgt * fake_d_loss
- self.d_acc = utils.predictor_accuracy(self.d_pred, self.domains)
-
- # def domain_test(self, reuse=False):
- # with tf.variable_scope('domain_test_fc1', reuse = reuse):
- # fc_1 = layers.fully_connected(inputs=self.features, num_outputs=self.latent_dim,
- # activation_fn=tf.nn.relu, weights_initializer=self.initializer)
- # with tf.variable_scope('domain_test_logits', reuse = reuse):
- # d_logits = layers.fully_connected(inputs=fc_1, num_outputs=self.num_domains,
- # activation_fn=None, weights_initializer=self.initializer)
-
- # logits_real = tf.slice(d_logits, [0, 0], [self.batch_size, -1])
- # logits_fake = tf.slice(d_logits, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
-
- # self.test_pq = tf.nn.softmax(d_logits)
-
- # label_real = tf.slice(self.domains, [0, 0], [self.batch_size, -1])
- # label_fake = tf.slice(self.domains, [self.batch_size, 0], [self.batch_size * self.num_targets, -1])
-
- # real_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_real, labels = label_real))
- # fake_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = logits_fake, labels = label_fake))
- # self.d_test = real_d_loss + fake_d_loss
-
- # def distance(self, a, b):
- # a_matrix = tf.tile(tf.expand_dims(a, 0), [a.shape[0], 1, 1])
- # b_matrix = tf.tile(tf.expand_dims(b, 0), [b.shape[0], 1, 1])
- # b_matrix = tf.transpose(b_matrix, [1,0,2])
- # distance = K.sqrt(K.maximum(K.sum(K.square(a_matrix - b_matrix), axis=2), K.epsilon()))
- # return distance
-
- # def calculate_mask(self, idx, idx_2):
- # idx_matrix = tf.tile(tf.expand_dims(idx, 0), [idx.shape[0], 1])
- # idx_2_matrix = tf.tile(tf.expand_dims(idx_2, 0), [idx_2.shape[0], 1])
- # idx_2_transpose = tf.transpose(idx_2_matrix, [1,0])
- # mask = tf.cast(tf.equal(idx_matrix, idx_2_transpose), tf.float32)
- # return mask
-
- # def contrastive_loss(self, y_true, y_pred, hinge=1.0):
- # margin = hinge
- # square_pred = K.square(y_pred)
- # margin_square = K.square(K.maximum(margin - y_pred, 0))
- # return K.mean(y_true * square_pred + (1 - y_true) * margin_square)
-
- def _build_model(self):
- self.feature_extractor_c()
- self.feature_extractor_d()
- self.label_predictor()
- self.domain_predictor()
- self.club()
- # self.domain_test()
-
- # self.src_pred = tf.argmax(tf.slice(self.y_pred, [0, 0], [self.batch_size, -1]), axis=-1)
- # self.distance = self.distance(self.features_src, self.features_src)
- # self.batch_compare = self.calculate_mask(self.src_pred, self.src_pred)
- # self.con_loss = self.contrastive_loss(self.batch_compare, self.distance)
-
- self.context_loss = self.y_loss + 0.1 * self.bound# + self.reg_con*self.con_loss
- self.domain_loss = self.d_loss
-
- def _setup_train_ops(self):
- context_vars = utils.vars_from_scopes(['feature_extractor_c', 'label_predictor'])
- domain_vars = utils.vars_from_scopes(['feature_extractor_d', 'domain_predictor'])
- mi_vars = utils.vars_from_scopes(['mi_net'])
- self.domain_test_vars = utils.vars_from_scopes(['domain_test'])
-
- # original version: no loss scaling applied
- self.train_context_ops = tf.train.AdamOptimizer(self.lr_g,0.5).minimize(self.context_loss, var_list = context_vars)
- self.train_domain_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(self.domain_loss, var_list = domain_vars)
- self.train_mi_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(-self.lld, var_list = mi_vars)
- # self.test_domain_ops = tf.train.AdamOptimizer(self.lr_d,0.5).minimize(self.d_test, var_list = self.domain_test_vars)
-
- # # With LossScale added:
- # trainContextOps = tf.train.AdamOptimizer(self.lr_g,0.5)
- # trainDomainOps = tf.train.AdamOptimizer(self.lr_d,0.5)
- # trainMiOps = tf.train.AdamOptimizer(self.lr_d,0.5)
- #
- # loss_scale_manager1 = ExponentialUpdateLossScaleManager(init_loss_scale=2**32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
- # self.train_context_ops = NPULossScaleOptimizer(trainContextOps,loss_scale_manager1).minimize(self.context_loss, var_list = context_vars)
- #
- # loss_scale_manager2 = ExponentialUpdateLossScaleManager(init_loss_scale=2 ** 32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
- # self.train_domain_ops = NPULossScaleOptimizer(trainDomainOps,loss_scale_manager2).minimize(self.domain_loss, var_list = domain_vars)
- #
- # loss_scale_manager3 = ExponentialUpdateLossScaleManager(init_loss_scale=2 ** 32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
- # self.train_mi_ops = NPULossScaleOptimizer(trainMiOps,loss_scale_manager3).minimize(-self.lld, var_list = mi_vars)
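For reference, the exponential-update policy behind `ExponentialUpdateLossScaleManager` in the commented-out block above can be sketched in plain Python. This is an illustration of the policy implied by the parameters shown there (`init_loss_scale=2**32`, `incr_every_n_steps=1000`, `decr_every_n_nan_or_inf=2`, `decr_ratio=0.5`), not the Ascend API itself:

```python
# Minimal sketch of an exponential-update dynamic loss scale (illustration
# only; the real manager lives in the NPU bridge, not in this file).
class ExpLossScale:
    def __init__(self, init_scale=2**32, incr_every_n_steps=1000,
                 decr_every_n_nan_or_inf=2, decr_ratio=0.5):
        self.scale = float(init_scale)
        self.incr_every_n_steps = incr_every_n_steps
        self.decr_every_n_nan_or_inf = decr_every_n_nan_or_inf
        self.decr_ratio = decr_ratio
        self.good_steps = 0  # consecutive steps with finite gradients
        self.bad_steps = 0   # consecutive steps with NaN/Inf gradients

    def update(self, grads_finite):
        """Update the scale after one step; return True if the step applies."""
        if grads_finite:
            self.good_steps += 1
            self.bad_steps = 0
            if self.good_steps >= self.incr_every_n_steps:
                self.scale *= 2.0  # probe a larger scale after a clean run
                self.good_steps = 0
            return True
        self.bad_steps += 1
        self.good_steps = 0
        if self.bad_steps >= self.decr_every_n_nan_or_inf:
            self.scale = max(1.0, self.scale * self.decr_ratio)
            self.bad_steps = 0
        return False  # skip the parameter update on overflow
```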
-
-
-
-
-
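The `-self.lld` objective above trains the variational network (`mi_net`), while `self.bound` enters `context_loss`; both come from the CLUB mutual-information estimator of the referenced paper. A minimal NumPy sketch of the sampled bound, assuming a Gaussian q(y|x) for illustration (the actual `mi_net` architecture is defined elsewhere in `MNISTModel_DANN.py`):

```python
import numpy as np

# Sketch of the sampled CLUB upper bound: the variational network outputs
# `mu` and `logvar` of a Gaussian q(y|x); its log-likelihood (the `lld`
# term) is maximized, while the bound below is minimized.
def gaussian_log_density(y, mu, logvar):
    # log N(y; mu, exp(logvar)), summed over feature dimensions
    return (-0.5 * ((y - mu) ** 2 / np.exp(logvar)
                    + logvar + np.log(2 * np.pi))).sum(axis=-1)

def club_upper_bound(y, mu, logvar):
    # Positive term: E_{p(x,y)}[log q(y|x)] over matched pairs, shape (n,)
    positive = gaussian_log_density(y, mu, logvar)
    # Negative term: E_{p(x)}E_{p(y)}[log q(y|x)] over all pairs, shape (n, n)
    all_pairs = gaussian_log_density(y[None, :, :],
                                     mu[:, None, :],
                                     logvar[:, None, :])
    negative = all_pairs.mean(axis=1)
    return (positive - negative).mean()
```

When q(y|x) ignores x, the two terms coincide and the estimate collapses to zero; strong dependence between the inputs drives it positive.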
--
Gitee
From 78b71da9836952f71d678a69229ac5c60e7fd3df Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:48:35 +0000
Subject: [PATCH 15/16] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20Te?=
=?UTF-8?q?nsorFlow/contrib/cv/club/CLUB=5Ftf=5Fwubo9826/Readme.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../cv/club/CLUB_tf_wubo9826/Readme.md | 127 ------------------
1 file changed, 127 deletions(-)
delete mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
deleted file mode 100644
index 9c4fb7ff1..000000000
--- a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
+++ /dev/null
@@ -1,127 +0,0 @@
-- [Basic Information](#basic-information)
-- [Overview](#overview)
-- [Requirements](#requirements)
-- [Dataset](#dataset)
-- [Code and Path Description](#code-and-path-description)
-- [Running the code](#running-the-code)
- - [Run command](#run-command)
- - [Training log](#training-log)
-
-## Basic Information
-
-**Publisher: Huawei**
-
-**Application Domain: CV**
-
-**Version: 1.1**
-
-**Modified: 2022.7.20**
-
-**Framework: TensorFlow 1.15.0**
-
-**Model Format: ckpt**
-
-**Precision: Mixed**
-
-**Processor: Ascend 910**
-
-**Categories: Research**
-
-**Description: TensorFlow training code for an image-classification network based on the Contrastive Log-ratio Upper Bound (CLUB) of mutual information**
-
-## Overview
-
-The Contrastive Log-ratio Upper Bound (CLUB) of mutual information is validated on domain-adaptation experiments through mutual-information minimization, establishing a connection between mutual information and a broad range of machine-learning training strategies.
-
-- Reference paper and code:
-
-  https://github.com/Linear95/CLUB
-
-  http://proceedings.mlr.press/v119/cheng20b/cheng20b.pdf
-
-## Requirements
-
-- python 3.7
-- tensorflow 1.15.0
-- numpy
-- scikit-learn
-- opencv-python
-- scipy
-
-- Ascend: 1*Ascend 910; CPU: 24 vCPUs, 96 GB
-
-```
-Image: ascend-share/5.1.rc1.alpha005_tensorflow-ascend910-cp37-euleros2.8-aarch64-training:1.15.0-21.0.2_0401
-```
-
-## Dataset
-
-**SVHN dataset**
-
-SVHN (Street View House Numbers) is drawn from house numbers in Google Street View imagery; each image contains a group of Arabic digits '0'-'9'. Download link:
-
-```
-Link: https://pan.baidu.com/s/1gvfAMQ-PAj-QXAz6q3g93Q
-Extraction code: j5lc
-```
-
-**MNIST dataset**
-
-MNIST is a dataset of handwritten-digit images compiled under the US National Institute of Standards and Technology (NIST). It gathers handwritten digits from 250 different writers, half high-school students and half Census Bureau employees, and was assembled to support algorithmic recognition of handwritten digits. Download link:
-
-```
-Link: https://pan.baidu.com/s/1-2rTGOUQo9O_aLDPGoyn5Q
-Extraction code: 7sxl
-```
-
-## Code and Path Description
-
-```
-TF-CLUB
-└─
- ├─README.md
- ├─MI_DA                model code
- ├─imageloader.py       dataset-loading script
- ├─main_DANN.py         training script
- ├─MNISTModel_DANN.py   model-definition script
- ├─utils.py             utility functions
-```
-
-## Running the code
-
-### Run command
-
-#### GPU
-
-```shell
-python main_DANN.py --data_path /path/to/data_folder/ --save_path /path/to/save_dir/ --source svhn --target mnist
-```
-
-#### ModelArts
-
-```
-Framework: TensorFlow 1.15
-NPU: 1*Ascend 910; CPU: 24 vCPUs, 96 GB
-Image: ascend-share/5.1.rc1.alpha005_tensorflow-ascend910-cp37-euleros2.8-aarch64-training:1.15.0-21.0.2_0401
-OBS Path: /cann-id1254/
-Data Path in OBS: /cann-id1254/dataset/
-Debugger: unchecked
-```
-
-### Training log
-
-#### Accuracy results
-
-- GPU (V100) results
-
-  Source (svhn)
-  Target (mnist)
-  p_acc: 0.9688
-
-- NPU results
-
-  Source (svhn)
-  Target (mnist)
-  p_acc: 0.9688
-
--
Gitee
From 397b5c8dd974088ba9b6ead46d5a138c2df39b84 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BF=83=E6=80=9D=E7=BC=A0=E7=BB=B5?= <869083529@qq.com>
Date: Wed, 20 Jul 2022 09:48:56 +0000
Subject: [PATCH 16/16] readme
---
.../cv/club/CLUB_tf_wubo9826/Readme.md | 131 ++++++++++++++++++
1 file changed, 131 insertions(+)
create mode 100644 TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
diff --git a/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
new file mode 100644
index 000000000..0bde30a7b
--- /dev/null
+++ b/TensorFlow/contrib/cv/club/CLUB_tf_wubo9826/Readme.md
@@ -0,0 +1,131 @@
+- [Basic Information](#basic-information)
+- [Overview](#overview)
+- [Requirements](#requirements)
+- [Dataset](#dataset)
+- [Code and Path Description](#code-and-path-description)
+- [Running the code](#running-the-code)
+ - [Run command](#run-command)
+ - [Training log](#training-log)
+
+## Basic Information
+
+**Publisher: Huawei**
+
+**Application Domain: CV**
+
+**Version: 1.1**
+
+**Modified: 2022.7.20**
+
+**Framework: TensorFlow 1.15.0**
+
+**Model Format: ckpt**
+
+**Precision: Mixed**
+
+**Processor: Ascend 910**
+
+**Categories: Research**
+
+**Description: TensorFlow training code for an image-classification network based on the Contrastive Log-ratio Upper Bound (CLUB) of mutual information**
+
+## Overview
+
+The Contrastive Log-ratio Upper Bound (CLUB) of mutual information is validated on domain-adaptation experiments through mutual-information minimization, establishing a connection between mutual information and a broad range of machine-learning training strategies.
+
+- Reference paper and code:
+
+  https://github.com/Linear95/CLUB
+
+  http://proceedings.mlr.press/v119/cheng20b/cheng20b.pdf
+
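+For reference, the CLUB estimator from the paper above can be written as follows (our notation, with a variational approximation $q_\theta(y|x)$ of the conditional $p(y|x)$):
+
+$$
+\hat{I}_{\mathrm{CLUB}}(X;Y) = \mathbb{E}_{p(x,y)}\left[\log q_\theta(y|x)\right] - \mathbb{E}_{p(x)}\,\mathbb{E}_{p(y)}\left[\log q_\theta(y|x)\right]
+$$
+
+Minimizing this bound while maximizing the log-likelihood of $q_\theta$ (the `lld` term in `MNISTModel_DANN.py`) pushes the extracted features toward independence from the domain variable.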
+## Requirements
+
+- python 3.7
+- tensorflow 1.15.0
+- numpy
+- scikit-learn
+- opencv-python
+- scipy
+
+- Ascend: 1*Ascend 910; CPU: 24 vCPUs, 96 GB
+
+```
+Image: ascend-share/5.1.rc1.alpha005_tensorflow-ascend910-cp37-euleros2.8-aarch64-training:1.15.0-21.0.2_0401
+```
+
+## Dataset
+
+**SVHN dataset**
+
+SVHN (Street View House Numbers) is drawn from house numbers in Google Street View imagery; each image contains a group of Arabic digits '0'-'9'. Download link:
+
+```
+Link: https://pan.baidu.com/s/1gvfAMQ-PAj-QXAz6q3g93Q
+Extraction code: j5lc
+```
+
+**MNIST dataset**
+
+MNIST is a dataset of handwritten-digit images compiled under the US National Institute of Standards and Technology (NIST). It gathers handwritten digits from 250 different writers, half high-school students and half Census Bureau employees, and was assembled to support algorithmic recognition of handwritten digits. Download link:
+
+```
+Link: https://pan.baidu.com/s/1-2rTGOUQo9O_aLDPGoyn5Q
+Extraction code: 7sxl
+```
+
+## Code and Path Description
+
+```
+TF-CLUB
+└─
+ ├─README.md
+ ├─MI_DA                model code
+ ├─imageloader.py       dataset-loading script
+ ├─main_DANN.py         training script
+ ├─MNISTModel_DANN.py   model-definition script
+ ├─utils.py             utility functions
+```
+
+## Running the code
+
+### Run command
+
+#### GPU
+
+```shell
+python main_DANN.py --data_path /path/to/data_folder/ --save_path /path/to/save_dir/ --source svhn --target mnist
+```
+
+#### ModelArts
+
+```
+Framework: TensorFlow 1.15
+NPU: 1*Ascend 910; CPU: 24 vCPUs, 96 GB
+Image: ascend-share/5.1.rc1.alpha005_tensorflow-ascend910-cp37-euleros2.8-aarch64-training:1.15.0-21.0.2_0401
+OBS Path: /cann-id1254/
+Data Path in OBS: /cann-id1254/dataset/
+Debugger: unchecked
+```
+
+### Training log
+
+#### Accuracy results
+
+- GPU (V100) results
+
+  Source (svhn)
+  Target (mnist)
+  p_acc: 0.9688
+
+- NPU results
+
+  Source (svhn)
+  Target (mnist)
+  p_acc: 0.9688
+
+Dataset: obs://cann-id1254/dataset
+
+Training log and code reaching the target accuracy: obs://cann-id1254/MA-new-CLUB-NPU-05-20-10-56
+
--
Gitee