diff --git a/ACL_TensorFlow/contrib/cv/Slot-Attention_ID2028_for_ACL/README.md b/ACL_TensorFlow/contrib/cv/Slot-Attention_ID2028_for_ACL/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..2b7511abe81e13ba6ae2441ab545787d13da98ac
--- /dev/null
+++ b/ACL_TensorFlow/contrib/cv/Slot-Attention_ID2028_for_ACL/README.md
@@ -0,0 +1,65 @@

## Overview

This work proposes the Slot Attention module, which builds a bridge between perceptual representations (e.g. CNN outputs) and slots (feature map/grid → set of slots).

- Reference paper:

      @article{locatello2020object,
        title={Object-Centric Learning with Slot Attention},
        author={Locatello, Francesco and Weissenborn, Dirk and Unterthiner, Thomas and Mahendran, Aravindh and Heigold, Georg and Uszkoreit, Jakob and Dosovitskiy, Alexey and Kipf, Thomas},
        journal={arXiv preprint arXiv:2006.15055},
        year={2020}
      }

- Reference implementation:

  https://github.com/google-research/google-research/tree/master/slot_attention

- Implementation adapted for the Ascend AI processor:

  https://gitee.com/ascend/ModelZoo-TensorFlow/tree/master/TensorFlow/contrib/cv/Slot-Attention_ID2028_for_TensorFlow
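To make the feature-map-to-slots mapping concrete, the following is a minimal NumPy sketch of one iterative attention update in the spirit of the paper. It is not the repository's implementation: the learned projections, layer norms, and the GRU/MLP slot update are omitted, and all names and sizes here are illustrative (7 slots and 3 iterations match this repo's configuration).

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention_step(slots, inputs, temperature):
    # slots: (num_slots, d); inputs: (num_inputs, d). Learned q/k/v projections omitted.
    attn_logits = inputs @ slots.T / temperature       # (num_inputs, num_slots)
    # Softmax over the *slot* axis: slots compete for input features.
    attn = softmax(attn_logits, axis=1)
    # Weighted mean over inputs per slot.
    attn = attn / attn.sum(axis=0, keepdims=True)
    updates = attn.T @ inputs                          # (num_slots, d)
    return updates  # the full model feeds this into a GRU + residual MLP

rng = np.random.default_rng(0)
inputs = rng.normal(size=(16 * 16, 64))  # flattened CNN feature map (grid → set)
slots = rng.normal(size=(7, 64))         # 7 slots, as in this repo
for _ in range(3):                       # 3 iterations, as in this repo
    slots = slot_attention_step(slots, inputs, temperature=np.sqrt(64.0))
print(slots.shape)  # (7, 64)
```

The key design point is that the softmax is taken over slots rather than inputs, so each input feature distributes its attention among the slots and the slots specialize on different parts of the scene.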

## Original Model

OBS address: obs://lwr-slot-npu/slottl/newslotmodel.pb

Step 1:
Use the keras_frozen_graph.py script to convert ckpt-499000 into a pb file.
OBS address of the checkpoint: obs://lwr-slot-npu/slottl/
(the four files in that directory whose names start with "checkpoint")

## PB Model

```
newslotmodel.pb
```

OBS address of the pb file: obs://lwr-slot-npu/slottl/newslotmodel.pb

## OM Model

Convert newslotmodel.pb to slotmodel.om.

The following command can be used as a reference when converting the model with the ATC conversion tool (note that the `--input_shape` value must not contain spaces):

```
atc --model=./newslotmodel.pb --input_shape="input:64,128,128,3" --framework=3 --output=slotmodel --soc_version=Ascend910A --precision_mode=force_fp32 --op_select_implmode=high_precision
```

A successful conversion produces slotmodel.om.

OBS address of slotmodel.om: obs://lwr-slot-npu/slottl/slotmodel.om

## Inference with the msame Tool

See https://gitee.com/ascend/tools/tree/master/msame for the msame inference tool and its usage.

Launch an inference test with msame using a command like the following:

```
./msame --model "slotmodel.om" --output "./" --outfmt TXT --loop 1
```

diff --git a/ACL_TensorFlow/contrib/cv/Slot-Attention_ID2028_for_ACL/keras_frozen_graph.py b/ACL_TensorFlow/contrib/cv/Slot-Attention_ID2028_for_ACL/keras_frozen_graph.py
new file mode 100644
index 0000000000000000000000000000000000000000..9a38dd9e520386d5d85318406a61826f550fc37f
--- /dev/null
+++ b/ACL_TensorFlow/contrib/cv/Slot-Attention_ID2028_for_ACL/keras_frozen_graph.py
@@ -0,0 +1,26 @@
import tensorflow as tf
from tensorflow.python.framework import graph_util
import model as model_utils

# Input placeholder matching the training batch size and resolution.
inputs = tf.placeholder(tf.float32, shape=[64, 128, 128, 3], name='input')

# Build the inference graph.
resolution = (128, 128)
batch_size = 64
num_slots = 7
num_iterations = 3

model = model_utils.build_model(resolution, batch_size, num_slots,
                                num_iterations, model_type="object_discovery")
logit1, logit2, logit3, logit4 = model(inputs, training=False)
print("----- graph build finished -----")
saver = tf.train.Saver(max_to_keep=5)

with tf.Session() as sess:
    saver.restore(sess, '/home/disk/checkp/checkpoint.ckpt-499000')
    print("----- starting conversion -----")
    # Freeze the graph: fold variables into constants up to the output node.
    output_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['model/slot_attention_auto_encoder/Sum'])
    print("----- conversion finished -----")
    with tf.gfile.GFile('./newslotmodel.pb', 'wb') as f:
        f.write(output_graph_def.SerializeToString())  # writes newslotmodel.pb
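With `--outfmt TXT`, msame writes the output tensor as plain-text floating-point values. A small stdlib-only sketch for reading such a file back into Python is shown below; the output file name is an assumption (msame derives it from the model name and output index), and here a synthetic file stands in for real inference output.

```python
# Hypothetical helper: parse an msame --outfmt TXT result file into a flat
# list of floats (whitespace-separated values, possibly over several lines).

def load_msame_txt(path):
    values = []
    with open(path) as f:
        for line in f:
            values.extend(float(tok) for tok in line.split())
    return values

# Synthetic stand-in for an msame output file (the real file name may differ).
with open("slotmodel_output_0.txt", "w") as f:
    f.write("0.1 0.2 0.3\n0.4 0.5 0.6\n")

vals = load_msame_txt("slotmodel_output_0.txt")
print(len(vals))  # 6
```

The flat value list can then be reshaped to the model's output layout (e.g. with `numpy.reshape`) for comparison against the pb model's outputs.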